Test Report: Docker_Linux_containerd_arm64 21997

4e6ec0ce1ba9ad510ab2048b3373e13c9f965153:2025-12-05:42642

Failed tests: 34 of 417

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 506.47
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.23
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.17
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.28
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.22
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.14
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.08
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.26
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.32
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.63
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.42
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 94.7
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.07
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.57
358 TestKubernetesUpgrade 789.72
435 TestStartStop/group/no-preload/serial/FirstStart 510.77
437 TestStartStop/group/newest-cni/serial/FirstStart 512.95
438 TestStartStop/group/no-preload/serial/DeployApp 3.2
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 98.19
441 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 116.1
444 TestStartStop/group/no-preload/serial/SecondStart 372.02
447 TestStartStop/group/newest-cni/serial/SecondStart 375.43
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.88
452 TestStartStop/group/newest-cni/serial/Pause 10.35
486 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 274.19
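
To reproduce one of these failures outside CI, minikube's integration suite runs under the plain Go test harness (the functional_test.go and helpers_test.go references below come from it). A hedged sketch, assuming the `make integration` target and TEST_ARGS variable from minikube's contributor guide; exact flags may differ at this commit:

    # Re-run only the TestFunctionalNewestKubernetes group against the docker
    # driver, mirroring this job. Assumes a minikube checkout at commit 4e6ec0ce
    # with out/minikube-linux-arm64 already built; this job also passes
    # --container-runtime=containerd in its start args.
    env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestFunctionalNewestKubernetes" \
      make integration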
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (506.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1205 06:15:45.660813    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:01.801022    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:29.506724    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.020046    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.026485    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.037975    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.059353    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.100718    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.182165    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.343673    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.665348    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:15.307400    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:16.588773    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:19.150120    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:24.271867    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:34.513504    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:54.995222    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:20:35.957831    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:57.880048    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:23:01.797781    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m25.010620683s)

-- stdout --
	* [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:40155
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:40155 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001210532s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123814s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123814s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
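The shape of the failure is identical on all three kubeadm attempts above: init reaches [kubelet-start], then the kubelet never answers on 127.0.0.1:10248 within the 4m0s window. Given the SystemVerification warning that kubelet v1.35+ refuses cgroups v1 unless the KubeletConfiguration option FailCgroupV1 is set to false, and that the host (per the docker info later in these logs) reports CgroupDriver:cgroupfs, a cgroup v1 refusal is a plausible root cause, though the captured output does not confirm it. A hedged diagnostic sketch using only commands the output itself suggests:

    # Run kubeadm's own suggested checks inside the still-running node container.
    out/minikube-linux-arm64 -p functional-101526 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-101526 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # Probe the same health endpoint the kubelet-check polls.
    out/minikube-linux-arm64 -p functional-101526 ssh -- curl -sS http://127.0.0.1:10248/healthz

Note that minikube's printed suggestion (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) targets an older cgroup-driver mismatch; whether it applies to this v1.35.0-beta.0 failure is unverified.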
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
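The inspect output shows the node container itself is fine: Running with RestartCount 0, 4 GiB of memory, and the apiserver port 8441/tcp published on 127.0.0.1:32791. A small hedged one-liner to pull that mapping straight out of the same JSON:

    # Go-template query over the docker inspect output shown above.
    docker inspect functional-101526 \
      --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
    # expected output here: 32791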
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 6 (334.137346ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 06:23:33.398923   48229 status.go:458] kubeconfig endpoint: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
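The exit status 6 is a downstream symptom: the failed start never wrote a functional-101526 entry into the kubeconfig, so status cannot resolve an endpoint. A hedged sketch of the remedy the status output itself prints, which can only succeed once the cluster actually comes up:

    # Rewrite this profile's kubeconfig entry, then verify the context exists.
    out/minikube-linux-arm64 -p functional-101526 update-context
    kubectl config get-contexts functional-101526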
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/41922.pem                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /usr/share/ca-certificates/41922.pem                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save kicbase/echo-server:functional-226068 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image rm kicbase/echo-server:functional-226068 --alsologtostderr                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format short --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format yaml --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format json --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format table --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh            │ functional-226068 ssh pgrep buildkitd                                                                                                                           │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image          │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                          │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete         │ -p functional-226068                                                                                                                                            │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start          │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:15:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:15:08.085680   42237 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:15:08.085787   42237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:08.085791   42237 out.go:374] Setting ErrFile to fd 2...
	I1205 06:15:08.085795   42237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:08.086057   42237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:15:08.086476   42237 out.go:368] Setting JSON to false
	I1205 06:15:08.087263   42237 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1764911853,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:15:08.087323   42237 start.go:143] virtualization:  
	I1205 06:15:08.091834   42237 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:15:08.095712   42237 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:15:08.095807   42237 notify.go:221] Checking for updates...
	I1205 06:15:08.103071   42237 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:15:08.106468   42237 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:15:08.109678   42237 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:15:08.112864   42237 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:15:08.116063   42237 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:15:08.119351   42237 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:15:08.152457   42237 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:15:08.152570   42237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:15:08.207578   42237 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-05 06:15:08.198429499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:15:08.207681   42237 docker.go:319] overlay module found
	I1205 06:15:08.210970   42237 out.go:179] * Using the docker driver based on user configuration
	I1205 06:15:08.214116   42237 start.go:309] selected driver: docker
	I1205 06:15:08.214126   42237 start.go:927] validating driver "docker" against <nil>
	I1205 06:15:08.214138   42237 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:15:08.214872   42237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:15:08.279305   42237 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-05 06:15:08.270632196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:15:08.279443   42237 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:15:08.279663   42237 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:15:08.282757   42237 out.go:179] * Using Docker driver with root privileges
	I1205 06:15:08.285777   42237 cni.go:84] Creating CNI manager for ""
	I1205 06:15:08.285841   42237 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:15:08.285849   42237 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:15:08.285929   42237 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:15:08.289062   42237 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:15:08.292061   42237 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:15:08.295079   42237 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:15:08.298052   42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:15:08.298128   42237 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:15:08.318088   42237 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:15:08.318098   42237 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:15:08.348375   42237 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:15:08.543328   42237 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
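Both preload URLs above return 404: no v18 preload tarball has been published for Kubernetes v1.35.0-beta.0 on containerd/arm64, so minikube falls back to loading each cached image individually (the cache.go lines that follow). A quick way to re-check availability when triaging similar runs, assuming only that curl is installed (URL copied from the log line above):

	# Print the HTTP status code for the preload tarball URL
	curl -s -o /dev/null -w "%{http_code}\n" \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4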
	I1205 06:15:08.543549   42237 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543639   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:15:08.543648   42237 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.563µs
	I1205 06:15:08.543660   42237 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:15:08.543676   42237 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543709   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:15:08.543712   42237 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 44.177µs
	I1205 06:15:08.543717   42237 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:15:08.543719   42237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:15:08.543726   42237 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543751   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:15:08.543755   42237 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 30.81µs
	I1205 06:15:08.543752   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json: {Name:mkbb98d5b7a6e64e5ab9397a325db089a7d7b14b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:08.543761   42237 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:15:08.543770   42237 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543852   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:15:08.543856   42237 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 87.09µs
	I1205 06:15:08.543860   42237 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:15:08.543868   42237 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543893   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:15:08.543897   42237 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.318µs
	I1205 06:15:08.543901   42237 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:15:08.543912   42237 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:15:08.543909   42237 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543933   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:15:08.543934   42237 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543937   42237 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.136µs
	I1205 06:15:08.543952   42237 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:15:08.543960   42237 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.543970   42237 start.go:364] duration metric: took 27.758µs to acquireMachinesLock for "functional-101526"
	I1205 06:15:08.543984   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:15:08.543988   42237 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.973µs
	I1205 06:15:08.543992   42237 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:15:08.544000   42237 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:15:08.544024   42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:15:08.544028   42237 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.702µs
	I1205 06:15:08.544033   42237 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:15:08.543985   42237 start.go:93] Provisioning new machine with config: &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:15:08.544040   42237 cache.go:87] Successfully saved all images to host disk.
	I1205 06:15:08.544044   42237 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:15:08.549544   42237 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1205 06:15:08.549815   42237 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:40155 to docker env.
	I1205 06:15:08.549884   42237 start.go:159] libmachine.API.Create for "functional-101526" (driver="docker")
	I1205 06:15:08.549905   42237 client.go:173] LocalClient.Create starting
	I1205 06:15:08.549977   42237 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 06:15:08.550007   42237 main.go:143] libmachine: Decoding PEM data...
	I1205 06:15:08.550020   42237 main.go:143] libmachine: Parsing certificate...
	I1205 06:15:08.550086   42237 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 06:15:08.550103   42237 main.go:143] libmachine: Decoding PEM data...
	I1205 06:15:08.550113   42237 main.go:143] libmachine: Parsing certificate...
	I1205 06:15:08.550472   42237 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:15:08.575886   42237 cli_runner.go:211] docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:15:08.575961   42237 network_create.go:284] running [docker network inspect functional-101526] to gather additional debugging logs...
	I1205 06:15:08.575975   42237 cli_runner.go:164] Run: docker network inspect functional-101526
	W1205 06:15:08.592841   42237 cli_runner.go:211] docker network inspect functional-101526 returned with exit code 1
	I1205 06:15:08.592860   42237 network_create.go:287] error running [docker network inspect functional-101526]: docker network inspect functional-101526: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-101526 not found
	I1205 06:15:08.592871   42237 network_create.go:289] output of [docker network inspect functional-101526]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-101526 not found
	
	** /stderr **
	I1205 06:15:08.592966   42237 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:15:08.609111   42237 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16ec0}
	I1205 06:15:08.609144   42237 network_create.go:124] attempt to create docker network functional-101526 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:15:08.609233   42237 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-101526 functional-101526
	I1205 06:15:08.671175   42237 network_create.go:108] docker network functional-101526 192.168.49.0/24 created
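The network create step above is self-contained; the same command, reflowed for readability (all flags and the profile name are taken verbatim from the log line above):

	# Recreate the per-profile bridge network used by the kic driver
	docker network create --driver=bridge \
	  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=functional-101526 \
	  functional-101526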
	I1205 06:15:08.671201   42237 kic.go:121] calculated static IP "192.168.49.2" for the "functional-101526" container
	I1205 06:15:08.671276   42237 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:15:08.687249   42237 cli_runner.go:164] Run: docker volume create functional-101526 --label name.minikube.sigs.k8s.io=functional-101526 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:15:08.703836   42237 oci.go:103] Successfully created a docker volume functional-101526
	I1205 06:15:08.703911   42237 cli_runner.go:164] Run: docker run --rm --name functional-101526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-101526 --entrypoint /usr/bin/test -v functional-101526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:15:09.262146   42237 oci.go:107] Successfully prepared a docker volume functional-101526
	I1205 06:15:09.262216   42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 06:15:09.262350   42237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 06:15:09.262464   42237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:15:09.318657   42237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-101526 --name functional-101526 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-101526 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-101526 --network functional-101526 --ip 192.168.49.2 --volume functional-101526:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
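The docker run above is the core of the kic driver's createHost step; abridged here for readability (flags copied from the log line above; the full invocation also sets the minikube labels and publishes 127.0.0.1 ports for 22, 2376, 5000 and 32443):

	# Privileged "node" container on the profile network with a static IP
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname functional-101526 --name functional-101526 \
	  --network functional-101526 --ip 192.168.49.2 \
	  --volume functional-101526:/var \
	  --memory=4096mb --cpus=2 -e container=docker \
	  --expose 8441 --publish=127.0.0.1::8441 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b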
	I1205 06:15:09.620406   42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Running}}
	I1205 06:15:09.639845   42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:15:09.662996   42237 cli_runner.go:164] Run: docker exec functional-101526 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:15:09.711148   42237 oci.go:144] the created container "functional-101526" has a running status.
	I1205 06:15:09.711167   42237 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa...
	I1205 06:15:09.925808   42237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:15:09.954961   42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:15:09.979514   42237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:15:09.979525   42237 kic_runner.go:114] Args: [docker exec --privileged functional-101526 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:15:10.044015   42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:15:10.071254   42237 machine.go:94] provisionDockerMachine start ...
	I1205 06:15:10.071342   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:10.098421   42237 main.go:143] libmachine: Using SSH client type: native
	I1205 06:15:10.098737   42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:15:10.098743   42237 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:15:10.099371   42237 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56338->127.0.0.1:32788: read: connection reset by peer
	I1205 06:15:13.248903   42237 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:15:13.248917   42237 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:15:13.248995   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:13.266997   42237 main.go:143] libmachine: Using SSH client type: native
	I1205 06:15:13.267294   42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:15:13.267302   42237 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:15:13.426392   42237 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:15:13.426477   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:13.444275   42237 main.go:143] libmachine: Using SSH client type: native
	I1205 06:15:13.444583   42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:15:13.444597   42237 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:15:13.597302   42237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:15:13.597319   42237 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:15:13.597343   42237 ubuntu.go:190] setting up certificates
	I1205 06:15:13.597351   42237 provision.go:84] configureAuth start
	I1205 06:15:13.597427   42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:15:13.615106   42237 provision.go:143] copyHostCerts
	I1205 06:15:13.615160   42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:15:13.615167   42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:15:13.615248   42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:15:13.615339   42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:15:13.615343   42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:15:13.615367   42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:15:13.615416   42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:15:13.615420   42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:15:13.615440   42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:15:13.615485   42237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:15:13.898799   42237 provision.go:177] copyRemoteCerts
	I1205 06:15:13.898849   42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:15:13.898888   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:13.916044   42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:15:14.017456   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:15:14.044736   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:15:14.063296   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:15:14.081456   42237 provision.go:87] duration metric: took 484.070012ms to configureAuth
	I1205 06:15:14.081473   42237 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:15:14.081675   42237 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:15:14.081682   42237 machine.go:97] duration metric: took 4.010417903s to provisionDockerMachine
	I1205 06:15:14.081687   42237 client.go:176] duration metric: took 5.531777807s to LocalClient.Create
	I1205 06:15:14.081703   42237 start.go:167] duration metric: took 5.531819415s to libmachine.API.Create "functional-101526"
	I1205 06:15:14.081709   42237 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:15:14.081719   42237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:15:14.081771   42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:15:14.081808   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:14.099301   42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:15:14.205397   42237 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:15:14.208761   42237 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:15:14.208780   42237 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:15:14.208789   42237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:15:14.208843   42237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:15:14.208929   42237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:15:14.209006   42237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:15:14.209050   42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:15:14.216927   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:15:14.234804   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:15:14.252605   42237 start.go:296] duration metric: took 170.883342ms for postStartSetup
	I1205 06:15:14.252986   42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:15:14.270158   42237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:15:14.270415   42237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:15:14.270453   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:14.287138   42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:15:14.387303   42237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:15:14.391918   42237 start.go:128] duration metric: took 5.847860125s to createHost
	I1205 06:15:14.391933   42237 start.go:83] releasing machines lock for "functional-101526", held for 5.84795753s
	I1205 06:15:14.392007   42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:15:14.414425   42237 out.go:179] * Found network options:
	I1205 06:15:14.417427   42237 out.go:179]   - HTTP_PROXY=localhost:40155
	W1205 06:15:14.420684   42237 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1205 06:15:14.423546   42237 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1205 06:15:14.426442   42237 ssh_runner.go:195] Run: cat /version.json
	I1205 06:15:14.426492   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:14.426516   42237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:15:14.426597   42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:15:14.446136   42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:15:14.446730   42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:15:14.646638   42237 ssh_runner.go:195] Run: systemctl --version
	I1205 06:15:14.653380   42237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:15:14.657636   42237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:15:14.657703   42237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:15:14.684713   42237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 06:15:14.684740   42237 start.go:496] detecting cgroup driver to use...
	I1205 06:15:14.684777   42237 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:15:14.684849   42237 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:15:14.700821   42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:15:14.714348   42237 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:15:14.714401   42237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:15:14.731838   42237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:15:14.750375   42237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:15:14.860625   42237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:15:14.984957   42237 docker.go:234] disabling docker service ...
	I1205 06:15:14.985014   42237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:15:15.016693   42237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:15:15.034358   42237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:15:15.165942   42237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:15:15.277220   42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:15:15.290747   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:15:15.304423   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:15:15.312935   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:15:15.321800   42237 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:15:15.321869   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:15:15.330598   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:15:15.339253   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:15:15.347734   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:15:15.356534   42237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:15:15.364637   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:15:15.373515   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:15:15.382173   42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:15:15.390876   42237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:15:15.398369   42237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:15:15.405846   42237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:15:15.514233   42237 ssh_runner.go:195] Run: sudo systemctl restart containerd
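The sed/sysctl sequence between 06:15:15.304 and 06:15:15.405 rewrites /etc/containerd/config.toml for this profile and then restarts the runtime. Condensed to the edits that matter for the cgroupfs configuration this test uses (sed expressions copied from the log lines above):

	# Pin the sandbox image and force the cgroupfs driver, then restart containerd
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # required for pod networking
	sudo systemctl daemon-reload && sudo systemctl restart containerd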
	I1205 06:15:15.607234   42237 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:15:15.607305   42237 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:15:15.611907   42237 start.go:564] Will wait 60s for crictl version
	I1205 06:15:15.611962   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:15.615738   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:15:15.640408   42237 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:15:15.640476   42237 ssh_runner.go:195] Run: containerd --version
	I1205 06:15:15.659995   42237 ssh_runner.go:195] Run: containerd --version
	I1205 06:15:15.683286   42237 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:15:15.686223   42237 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:15:15.702314   42237 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:15:15.706164   42237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:15:15.715794   42237 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:15:15.715892   42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:15:15.715947   42237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:15:15.739837   42237 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 06:15:15.739850   42237 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 06:15:15.739897   42237 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:15.740099   42237 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:15.740182   42237 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:15.740255   42237 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:15.740331   42237 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:15.740397   42237 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 06:15:15.740462   42237 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:15.740529   42237 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:15.741980   42237 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:15.742327   42237 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:15.742631   42237 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:15.742763   42237 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 06:15:15.742862   42237 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:15.742968   42237 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:15.743077   42237 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:15.743180   42237 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:16.093307   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 06:15:16.093379   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:16.114873   42237 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 06:15:16.114908   42237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:16.114958   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.118477   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:16.142234   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:16.155059   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 06:15:16.155122   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:16.172086   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 06:15:16.172154   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:16.176601   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:15:16.186285   42237 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 06:15:16.186318   42237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:16.186366   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.191422   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 06:15:16.191486   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:16.208716   42237 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 06:15:16.208759   42237 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:16.208806   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.233883   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 06:15:16.233969   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:15:16.234066   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:16.234109   42237 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 06:15:16.234131   42237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:16.234152   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.234204   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:16.242548   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 06:15:16.242602   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:16.246168   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 06:15:16.246224   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:16.254304   42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 06:15:16.254359   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 06:15:16.291393   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:16.291434   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 06:15:16.291447   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 06:15:16.291482   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:16.291539   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:16.350306   42237 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 06:15:16.350342   42237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:16.350388   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.357490   42237 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 06:15:16.357522   42237 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 06:15:16.357568   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.357613   42237 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 06:15:16.357625   42237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:16.357645   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:16.383324   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:16.383392   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:15:16.383452   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:15:16.410169   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:16.410241   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:15:16.410326   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:16.517134   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:15:16.517249   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 06:15:16.517314   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:15:16.517350   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 06:15:16.517403   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:15:16.542192   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:15:16.542271   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:16.542351   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:16.574743   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 06:15:16.574768   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 06:15:16.574821   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 06:15:16.574891   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:15:16.574931   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 06:15:16.574939   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 06:15:16.657974   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:15:16.658069   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:15:16.658295   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:15:16.658352   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 06:15:16.658366   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 06:15:16.748280   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 06:15:16.748370   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 06:15:16.748694   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 06:15:16.748757   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:15:16.752957   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 06:15:16.753053   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:15:16.783074   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 06:15:16.783102   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 06:15:16.783142   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 06:15:16.783150   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 06:15:16.786128   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 06:15:16.786153   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 06:15:16.881438   42237 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 06:15:16.881497   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1205 06:15:16.885558   42237 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 06:15:16.885725   42237 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 06:15:16.885782   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:17.227948   42237 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 06:15:17.227979   42237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:17.228030   42237 ssh_runner.go:195] Run: which crictl
	I1205 06:15:17.228071   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 06:15:17.228087   42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:15:17.228121   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:15:18.459841   42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.231698172s)
	I1205 06:15:18.459860   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 06:15:18.459883   42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:15:18.459932   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:15:18.459995   42237 ssh_runner.go:235] Completed: which crictl: (1.231958277s)
	I1205 06:15:18.460022   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:19.389944   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 06:15:19.389964   42237 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:15:19.390014   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:15:19.390084   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:20.381472   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 06:15:20.381604   42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:15:20.381661   42237 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:15:20.381684   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:15:21.780949   42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.399243244s)
	I1205 06:15:21.780965   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 06:15:21.780984   42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:15:21.781030   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:15:21.781096   42237 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.399484034s)
	I1205 06:15:21.781117   42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 06:15:21.781198   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:15:22.685465   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 06:15:22.685488   42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:15:22.685544   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:15:22.685567   42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 06:15:22.685594   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 06:15:23.750545   42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.064977577s)
	I1205 06:15:23.750577   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 06:15:23.750594   42237 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:15:23.750639   42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:15:24.105778   42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 06:15:24.105803   42237 cache_images.go:125] Successfully loaded all cached images
	I1205 06:15:24.105808   42237 cache_images.go:94] duration metric: took 8.365946804s to LoadCachedImages
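
[editor's note] All seven cached images were imported into containerd's k8s.io namespace in about 8.4s. A quick way to spot-check the result by hand is the same ctr listing the loader itself runs above, issued over minikube ssh (a sketch; it assumes the functional-101526 profile from this run is still up):

    minikube -p functional-101526 ssh -- sudo ctr -n=k8s.io images ls -q | grep -E 'kube-apiserver|etcd|pause'
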
	I1205 06:15:24.105826   42237 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:15:24.105938   42237 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
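
[editor's note] The ExecStart override above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp of that file appears later in this log). To inspect the effective unit plus drop-ins on the node (a sketch under the same profile assumption):

    minikube -p functional-101526 ssh -- systemctl cat kubelet
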
	I1205 06:15:24.106000   42237 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:15:24.131457   42237 cni.go:84] Creating CNI manager for ""
	I1205 06:15:24.131467   42237 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:15:24.131482   42237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:15:24.131503   42237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:15:24.131614   42237 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
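
[editor's note] kubeadm (v1.26 and newer) ships a 'config validate' subcommand that can sanity-check a generated file like the one above before init runs. A possible invocation inside the node, using the binary and config paths from this log (an illustration only; the test does not perform this step):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
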
	
	I1205 06:15:24.131686   42237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:15:24.139816   42237 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 06:15:24.139870   42237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:15:24.147770   42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 06:15:24.147852   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 06:15:24.147929   42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 06:15:24.147960   42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:15:24.148031   42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 06:15:24.148079   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 06:15:24.154890   42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 06:15:24.154915   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 06:15:24.168615   42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 06:15:24.168638   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 06:15:24.168707   42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 06:15:24.190394   42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 06:15:24.190418   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1205 06:15:24.948025   42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:15:24.958265   42237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:15:24.971429   42237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:15:24.985707   42237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 06:15:24.999853   42237 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:15:25.007685   42237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:15:25.023162   42237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:15:25.143062   42237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:15:25.161549   42237 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:15:25.161560   42237 certs.go:195] generating shared ca certs ...
	I1205 06:15:25.161585   42237 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:25.161732   42237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:15:25.161777   42237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:15:25.161783   42237 certs.go:257] generating profile certs ...
	I1205 06:15:25.161835   42237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:15:25.161844   42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt with IP's: []
	I1205 06:15:25.526060   42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt ...
	I1205 06:15:25.526076   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: {Name:mk0f62cdda76b04469b61f130355c66263b88984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:25.526273   42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key ...
	I1205 06:15:25.526279   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key: {Name:mkd962052fc981e118f4a3acb328540e925978f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:25.526386   42237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:15:25.526398   42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:15:25.807207   42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a ...
	I1205 06:15:25.807222   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a: {Name:mkfd06686d862316da89c3d24bf271a94894046c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:25.807407   42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a ...
	I1205 06:15:25.807413   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a: {Name:mkaece3f89e0518f39d77b16181dc1f1a3bf6684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:25.807494   42237 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt
	I1205 06:15:25.807576   42237 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key
	I1205 06:15:25.807630   42237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:15:25.807642   42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt with IP's: []
	I1205 06:15:26.044159   42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt ...
	I1205 06:15:26.044180   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt: {Name:mk500bfabecdb269c991955b2e95c327e15b1277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:26.044388   42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key ...
	I1205 06:15:26.044397   42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key: {Name:mk9a70f3d246740e46dcdee227203c2847fb514f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:15:26.044608   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:15:26.044652   42237 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:15:26.044660   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:15:26.044688   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:15:26.044718   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:15:26.044742   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:15:26.044788   42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:15:26.045440   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:15:26.065845   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:15:26.085106   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:15:26.105048   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:15:26.124533   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:15:26.142618   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:15:26.161778   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:15:26.180514   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:15:26.200113   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:15:26.218401   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:15:26.236432   42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:15:26.254626   42237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:15:26.267808   42237 ssh_runner.go:195] Run: openssl version
	I1205 06:15:26.274286   42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:15:26.282096   42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:15:26.290014   42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:15:26.294054   42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:15:26.294112   42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:15:26.335572   42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:15:26.343548   42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 06:15:26.351292   42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:15:26.359075   42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:15:26.367448   42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:15:26.371524   42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:15:26.371581   42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:15:26.414136   42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:15:26.421991   42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 06:15:26.429677   42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:15:26.439161   42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:15:26.446961   42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:15:26.451155   42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:15:26.451215   42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:15:26.492773   42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:15:26.500523   42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
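
[editor's note] The 8-hex-digit symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is why each 'ln -fs' is preceded by an 'openssl x509 -hash' run. To reproduce one by hand on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
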
	I1205 06:15:26.508204   42237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:15:26.511988   42237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:15:26.512033   42237 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:15:26.512101   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:15:26.512161   42237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:15:26.539779   42237 cri.go:89] found id: ""
	I1205 06:15:26.539848   42237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:15:26.547929   42237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:15:26.555769   42237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:15:26.555822   42237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:15:26.563699   42237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:15:26.563726   42237 kubeadm.go:158] found existing configuration files:
	
	I1205 06:15:26.563777   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:15:26.571872   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:15:26.571928   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:15:26.579588   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:15:26.588283   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:15:26.588337   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:15:26.596119   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:15:26.604901   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:15:26.604958   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:15:26.613041   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:15:26.621583   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:15:26.621636   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:15:26.629359   42237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:15:26.675413   42237 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:15:26.675612   42237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:15:26.742455   42237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:15:26.742525   42237 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:15:26.742566   42237 kubeadm.go:319] OS: Linux
	I1205 06:15:26.742610   42237 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:15:26.742657   42237 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:15:26.742736   42237 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:15:26.742808   42237 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:15:26.742856   42237 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:15:26.742904   42237 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:15:26.742948   42237 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:15:26.742995   42237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:15:26.743041   42237 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:15:26.811114   42237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:15:26.811218   42237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:15:26.811309   42237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:15:26.816797   42237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:15:26.825905   42237 out.go:252]   - Generating certificates and keys ...
	I1205 06:15:26.825997   42237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:15:26.826062   42237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:15:26.990701   42237 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:15:27.147598   42237 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:15:27.299024   42237 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:15:27.526717   42237 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:15:27.604897   42237 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:15:27.605064   42237 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:15:28.161195   42237 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:15:28.161503   42237 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:15:28.312221   42237 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:15:28.399597   42237 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:15:29.092420   42237 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:15:29.092663   42237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:15:29.175227   42237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:15:29.318346   42237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:15:29.442442   42237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:15:29.761424   42237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:15:30.125354   42237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:15:30.126072   42237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:15:30.129679   42237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:15:30.135534   42237 out.go:252]   - Booting up control plane ...
	I1205 06:15:30.135640   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:15:30.136530   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:15:30.137823   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:15:30.163133   42237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:15:30.163236   42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:15:30.173579   42237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:15:30.173672   42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:15:30.173711   42237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:15:30.320928   42237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:15:30.321041   42237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:19:30.322132   42237 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001210532s
	I1205 06:19:30.322154   42237 kubeadm.go:319] 
	I1205 06:19:30.322250   42237 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:19:30.322446   42237 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:19:30.322625   42237 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:19:30.322633   42237 kubeadm.go:319] 
	I1205 06:19:30.322813   42237 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:19:30.323107   42237 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:19:30.323160   42237 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:19:30.323164   42237 kubeadm.go:319] 
	I1205 06:19:30.327550   42237 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:19:30.328086   42237 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:19:30.328227   42237 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:19:30.328498   42237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:19:30.328502   42237 kubeadm.go:319] 
	I1205 06:19:30.328642   42237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 06:19:30.328693   42237 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001210532s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
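
[editor's note] The failure mode here is the kubelet never answering its healthz probe within the 4m0s budget, so kubeadm's own troubleshooting suggestion above is the right starting point. A sketch of the triage, run from the host against this profile (assuming the container is still running):

    minikube -p functional-101526 ssh -- sudo systemctl status kubelet --no-pager
    minikube -p functional-101526 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
    minikube -p functional-101526 ssh -- curl -sS http://127.0.0.1:10248/healthz
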
	
	I1205 06:19:30.328777   42237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:19:30.753784   42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:19:30.766951   42237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:19:30.767004   42237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:19:30.774736   42237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:19:30.774745   42237 kubeadm.go:158] found existing configuration files:
	
	I1205 06:19:30.774794   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:19:30.782488   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:19:30.782552   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:19:30.790443   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:19:30.798490   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:19:30.798547   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:19:30.806344   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:19:30.814414   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:19:30.814478   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:19:30.822387   42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:19:30.830621   42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:19:30.830675   42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:19:30.838428   42237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:19:30.945972   42237 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:19:30.946391   42237 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:19:31.016022   42237 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
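
[editor's note] Both init attempts hit the same cgroups v1 deprecation warning. Per the warning text (and the KEP-5573 link it cites), a kubelet of v1.35 or newer on a cgroup v1 host must opt in explicitly; a minimal KubeletConfiguration fragment that would do so is sketched below. This is an assumption drawn from the warning, not a change this test run makes:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
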
	I1205 06:23:32.632726   42237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:23:32.632745   42237 kubeadm.go:319] 
	I1205 06:23:32.632827   42237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:23:32.636565   42237 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:23:32.636616   42237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:23:32.636706   42237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:23:32.636760   42237 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:23:32.636795   42237 kubeadm.go:319] OS: Linux
	I1205 06:23:32.636839   42237 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:23:32.636887   42237 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:23:32.636933   42237 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:23:32.636980   42237 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:23:32.637079   42237 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:23:32.637127   42237 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:23:32.637195   42237 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:23:32.637242   42237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:23:32.637287   42237 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:23:32.637359   42237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:23:32.637452   42237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:23:32.637541   42237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:23:32.637603   42237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:23:32.639016   42237 out.go:252]   - Generating certificates and keys ...
	I1205 06:23:32.639097   42237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:23:32.639155   42237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:23:32.639226   42237 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:23:32.639282   42237 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:23:32.639346   42237 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:23:32.639396   42237 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:23:32.639455   42237 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:23:32.639512   42237 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:23:32.639581   42237 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:23:32.639648   42237 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:23:32.639683   42237 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:23:32.639734   42237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:23:32.639781   42237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:23:32.639833   42237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:23:32.639882   42237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:23:32.639942   42237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:23:32.639992   42237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:23:32.640072   42237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:23:32.640134   42237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:23:32.641540   42237 out.go:252]   - Booting up control plane ...
	I1205 06:23:32.641651   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:23:32.641738   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:23:32.641809   42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:23:32.641926   42237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:23:32.642025   42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:23:32.642137   42237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:23:32.642226   42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:23:32.642267   42237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:23:32.642407   42237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:23:32.642518   42237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:23:32.642587   42237 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001123814s
	I1205 06:23:32.642590   42237 kubeadm.go:319] 
	I1205 06:23:32.642649   42237 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:23:32.642682   42237 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:23:32.642793   42237 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:23:32.642796   42237 kubeadm.go:319] 
	I1205 06:23:32.642907   42237 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:23:32.642941   42237 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:23:32.642973   42237 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:23:32.643027   42237 kubeadm.go:403] duration metric: took 8m6.130999032s to StartCluster
	I1205 06:23:32.643055   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:23:32.643116   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:23:32.643295   42237 kubeadm.go:319] 
	I1205 06:23:32.669477   42237 cri.go:89] found id: ""
	I1205 06:23:32.669491   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.669498   42237 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:23:32.669503   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:23:32.669562   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:23:32.695194   42237 cri.go:89] found id: ""
	I1205 06:23:32.695208   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.695215   42237 logs.go:284] No container was found matching "etcd"
	I1205 06:23:32.695220   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:23:32.695276   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:23:32.719538   42237 cri.go:89] found id: ""
	I1205 06:23:32.719556   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.719563   42237 logs.go:284] No container was found matching "coredns"
	I1205 06:23:32.719569   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:23:32.719622   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:23:32.748887   42237 cri.go:89] found id: ""
	I1205 06:23:32.748900   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.748907   42237 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:23:32.748913   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:23:32.748972   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:23:32.777759   42237 cri.go:89] found id: ""
	I1205 06:23:32.777772   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.777779   42237 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:23:32.777784   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:23:32.777841   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:23:32.802728   42237 cri.go:89] found id: ""
	I1205 06:23:32.802742   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.802749   42237 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:23:32.802762   42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:23:32.802818   42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:23:32.827595   42237 cri.go:89] found id: ""
	I1205 06:23:32.827609   42237 logs.go:282] 0 containers: []
	W1205 06:23:32.827616   42237 logs.go:284] No container was found matching "kindnet"
	I1205 06:23:32.827624   42237 logs.go:123] Gathering logs for container status ...
	I1205 06:23:32.827635   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:23:32.855529   42237 logs.go:123] Gathering logs for kubelet ...
	I1205 06:23:32.855546   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:23:32.913895   42237 logs.go:123] Gathering logs for dmesg ...
	I1205 06:23:32.913915   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:23:32.924927   42237 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:23:32.924943   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:23:32.990913   42237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:23:32.983297    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.983920    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.985604    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.986098    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.987539    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:23:32.983297    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.983920    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.985604    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.986098    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:32.987539    5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:23:32.990923   42237 logs.go:123] Gathering logs for containerd ...
	I1205 06:23:32.990933   42237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 06:23:33.033093   42237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123814s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:23:33.033137   42237 out.go:285] * 
	W1205 06:23:33.033222   42237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123814s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:23:33.033290   42237 out.go:285] * 
	W1205 06:23:33.035445   42237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:23:33.038752   42237 out.go:203] 
	W1205 06:23:33.040339   42237 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123814s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:23:33.040381   42237 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:23:33.040401   42237 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:23:33.041722   42237 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:15:18 functional-101526 containerd[765]: time="2025-12-05T06:15:18.466794282Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.380868960Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.382956520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.391111155Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.391945994Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.372606956Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.374866858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.390438399Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.391926388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.772952290Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.775274305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.789261652Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.801952108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.675553467Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.677773820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.692521911Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.693377082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.739345342Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.741500504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.764578635Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.765671888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.096994562Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.099843811Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.107868214Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.108162846Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:23:33.992733    5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:33.993428    5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:33.994993    5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:33.995516    5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:23:33.997033    5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:23:34 up  1:06,  0 user,  load average: 0.08, 0.49, 0.68
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:23:30 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 05 06:23:31 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:31 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:31 functional-101526 kubelet[5288]: E1205 06:23:31.140595    5288 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 05 06:23:31 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:31 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:31 functional-101526 kubelet[5293]: E1205 06:23:31.885579    5293 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 06:23:32 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:32 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:32 functional-101526 kubelet[5299]: E1205 06:23:32.650700    5299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 06:23:33 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:33 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:23:33 functional-101526 kubelet[5394]: E1205 06:23:33.369594    5394 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
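
The kubelet journal above isolates the root cause: kubelet v1.35.0-beta.0 fails configuration validation outright on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd loops through restarts 318-321 and the kubeadm health check at http://127.0.0.1:10248/healthz never succeeds. As a quick sanity check (a sketch, not a command from this run), the host's cgroup version can be confirmed with:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a cgroup v1 host
	stat -fc %T /sys/fs/cgroup/

The CI host here is Ubuntu 20.04 (per the start logs below), which still defaults to the cgroup v1 hierarchy; that is consistent with the validation failure.
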
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 6 (358.550497ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 06:23:34.497796   48449 status.go:458] kubeconfig endpoint: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (506.47s)
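
Two remedies are named in the failure output itself; a minimal sketch of each follows (illustrative commands, not taken from this run):

	# 1) The minikube suggestion (issue #4172): force the systemd cgroup driver
	minikube start -p functional-101526 --extra-config=kubelet.cgroup-driver=systemd

	# 2) The kubeadm [WARNING SystemVerification]: explicitly opt back into
	#    cgroup v1. Assuming the v1beta1 KubeletConfiguration field mirrors
	#    the option name cited in the warning ('FailCgroupV1'), the kubelet
	#    config would carry:
	#      failCgroupV1: false
	#    and the SystemVerification preflight check must also be skipped.

Note that the first option changes the cgroup driver, not the cgroup version, so on a kubelet that enforces FailCgroupV1 it may not be sufficient on its own; moving the host to cgroup v2 (e.g. booting with systemd.unified_cgroup_hierarchy=1) is the direction the deprecation notice points to.
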

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1205 06:23:34.512945    4192 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --alsologtostderr -v=8
E1205 06:24:14.019857    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:24:41.722414    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:01.797764    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:29:14.019816    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:29:24.868964    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --alsologtostderr -v=8: exit status 80 (6m5.100694791s)

-- stdout --
	* [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1205 06:23:34.555640   48520 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:23:34.555757   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.555768   48520 out.go:374] Setting ErrFile to fd 2...
	I1205 06:23:34.555773   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.556051   48520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:23:34.556413   48520 out.go:368] Setting JSON to false
	I1205 06:23:34.557238   48520 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3961,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:23:34.557311   48520 start.go:143] virtualization:  
	I1205 06:23:34.559039   48520 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:23:34.560249   48520 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:23:34.560305   48520 notify.go:221] Checking for updates...
	I1205 06:23:34.562854   48520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:23:34.564039   48520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:34.565137   48520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:23:34.566333   48520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:23:34.567598   48520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:23:34.569245   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:34.569354   48520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:23:34.590301   48520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:23:34.590415   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.653386   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.643338894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.653494   48520 docker.go:319] overlay module found
	I1205 06:23:34.655010   48520 out.go:179] * Using the docker driver based on existing profile
	I1205 06:23:34.656153   48520 start.go:309] selected driver: docker
	I1205 06:23:34.656167   48520 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.656269   48520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:23:34.656363   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.713521   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.704040472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.713916   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:34.713979   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:34.714025   48520 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.715459   48520 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:23:34.716546   48520 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:23:34.717743   48520 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:23:34.719027   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:34.719180   48520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:23:34.738218   48520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:23:34.738240   48520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:23:34.779237   48520 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:23:34.998431   48520 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 06:23:34.998624   48520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:23:34.998714   48520 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998796   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:23:34.998805   48520 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.154µs
	I1205 06:23:34.998818   48520 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:23:34.998828   48520 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998857   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:23:34.998862   48520 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.504µs
	I1205 06:23:34.998868   48520 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998878   48520 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998890   48520 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:23:34.998904   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:23:34.998909   48520 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.361µs
	I1205 06:23:34.998916   48520 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998919   48520 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998925   48520 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998953   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:23:34.998955   48520 start.go:364] duration metric: took 23.967µs to acquireMachinesLock for "functional-101526"
	I1205 06:23:34.998958   48520 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.961µs
	I1205 06:23:34.998965   48520 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998968   48520 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:23:34.998973   48520 fix.go:54] fixHost starting: 
	I1205 06:23:34.998973   48520 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999001   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:23:34.999006   48520 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.323µs
	I1205 06:23:34.999012   48520 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:23:34.999020   48520 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999055   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:23:34.999060   48520 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.108µs
	I1205 06:23:34.999066   48520 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:23:34.999076   48520 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999117   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:23:34.999122   48520 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.426µs
	I1205 06:23:34.999127   48520 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:23:34.999135   48520 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999162   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:23:34.999167   48520 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.427µs
	I1205 06:23:34.999172   48520 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:23:34.999180   48520 cache.go:87] Successfully saved all images to host disk.
	I1205 06:23:34.999246   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:35.021908   48520 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:23:35.021948   48520 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:23:35.023534   48520 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:23:35.023573   48520 machine.go:94] provisionDockerMachine start ...
	I1205 06:23:35.023662   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.041007   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.041395   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.041419   48520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:23:35.188597   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.188620   48520 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:23:35.188686   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.205143   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.205585   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.205604   48520 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:23:35.361531   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.361628   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.381210   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.381606   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.381630   48520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:23:35.529415   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
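	The shell block above is idempotent: it rewrites /etc/hosts only when no line already ends in the hostname, preferring to edit an existing 127.0.1.1 entry over appending a new one. A hedged Go sketch of how a provisioner might render that script for an arbitrary hostname (the helper name is hypothetical, and minikube's real template may differ):

	package main

	import "fmt"

	// renderHostsFix renders the idempotent /etc/hosts snippet seen in the log,
	// parameterized by hostname. Sketch only.
	func renderHostsFix(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() { fmt.Println(renderHostsFix("functional-101526")) }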
	I1205 06:23:35.529441   48520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:23:35.529467   48520 ubuntu.go:190] setting up certificates
	I1205 06:23:35.529477   48520 provision.go:84] configureAuth start
	I1205 06:23:35.529543   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:35.549800   48520 provision.go:143] copyHostCerts
	I1205 06:23:35.549840   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549879   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:23:35.549910   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549992   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:23:35.550081   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550102   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:23:35.550111   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550138   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:23:35.550192   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550212   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:23:35.550220   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550244   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:23:35.550303   48520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:23:35.896062   48520 provision.go:177] copyRemoteCerts
	I1205 06:23:35.896131   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:23:35.896172   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.915295   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.022077   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:23:36.022150   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:23:36.041535   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:23:36.041647   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:23:36.060235   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:23:36.060320   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:23:36.078423   48520 provision.go:87] duration metric: took 548.924199ms to configureAuth
	I1205 06:23:36.078451   48520 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:23:36.078638   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:36.078652   48520 machine.go:97] duration metric: took 1.055064213s to provisionDockerMachine
	I1205 06:23:36.078660   48520 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:23:36.078671   48520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:23:36.078720   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:23:36.078768   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.096049   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.200907   48520 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:23:36.204162   48520 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:23:36.204182   48520 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:23:36.204187   48520 command_runner.go:130] > VERSION_ID="12"
	I1205 06:23:36.204192   48520 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:23:36.204196   48520 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:23:36.204200   48520 command_runner.go:130] > ID=debian
	I1205 06:23:36.204205   48520 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:23:36.204210   48520 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:23:36.204232   48520 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:23:36.204297   48520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:23:36.204316   48520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:23:36.204326   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:23:36.204380   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:23:36.204473   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:23:36.204485   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /etc/ssl/certs/41922.pem
	I1205 06:23:36.204565   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:23:36.204573   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> /etc/test/nested/copy/4192/hosts
	I1205 06:23:36.204620   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:23:36.211988   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:36.229308   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:23:36.246073   48520 start.go:296] duration metric: took 167.399532ms for postStartSetup
	I1205 06:23:36.246163   48520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:23:36.246202   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.262461   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.366102   48520 command_runner.go:130] > 13%
	I1205 06:23:36.366647   48520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:23:36.370745   48520 command_runner.go:130] > 169G
	I1205 06:23:36.371285   48520 fix.go:56] duration metric: took 1.372308275s for fixHost
	I1205 06:23:36.371306   48520 start.go:83] releasing machines lock for "functional-101526", held for 1.37234313s
	I1205 06:23:36.371420   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:36.390415   48520 ssh_runner.go:195] Run: cat /version.json
	I1205 06:23:36.390468   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.391053   48520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:23:36.391113   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.419642   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.424516   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.520794   48520 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:23:36.520923   48520 ssh_runner.go:195] Run: systemctl --version
	I1205 06:23:36.606649   48520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:23:36.609416   48520 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:23:36.609453   48520 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:23:36.609534   48520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:23:36.613918   48520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:23:36.613964   48520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:23:36.614023   48520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:23:36.621686   48520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
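	The find/-exec mv above renames any bridge or podman CNI config in /etc/cni/net.d to *.mk_disabled so that only the CNI minikube installs stays active; here nothing matched, hence "nothing to disable". A rough Go equivalent, assuming the same directory and naming convention (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("read dir:", err)
			return
		}
		disabled := 0
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join(dir, name)
				if err := os.Rename(old, old+".mk_disabled"); err == nil {
					disabled++ // same effect as the find/-exec mv in the log
				}
			}
		}
		fmt.Println("configs disabled:", disabled)
	}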
	I1205 06:23:36.621710   48520 start.go:496] detecting cgroup driver to use...
	I1205 06:23:36.621769   48520 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:23:36.621841   48520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:23:36.637331   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:23:36.650267   48520 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:23:36.650327   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:23:36.665934   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:23:36.679279   48520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:23:36.785775   48520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:23:36.894469   48520 docker.go:234] disabling docker service ...
	I1205 06:23:36.894545   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:23:36.910313   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:23:36.923239   48520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:23:37.033287   48520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:23:37.168163   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:23:37.180578   48520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:23:37.193942   48520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:23:37.194023   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:23:37.202471   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:23:37.211003   48520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:23:37.211119   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:23:37.219839   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.228562   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:23:37.237276   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.245970   48520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:23:37.253895   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:23:37.262450   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:23:37.271505   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:23:37.280464   48520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:23:37.287174   48520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:23:37.288154   48520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:23:37.295694   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.408389   48520 ssh_runner.go:195] Run: sudo systemctl restart containerd
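	The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to registry.k8s.io/pause:3.10.1, forcing SystemdCgroup = false to match the "cgroupfs" driver detected on the host, normalizing the runc runtime type to io.containerd.runc.v2, and re-enabling unprivileged ports; the daemon-reload and restart then pick the changes up. A minimal sketch of issuing one of those edits from Go, mirroring the logged SystemdCgroup substitution (error handling reduced to a print):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same substitution as the SystemdCgroup sed in the log above.
		cmd := exec.Command("sudo", "sed", "-i", "-r",
			`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
			"/etc/containerd/config.toml")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("sed failed: %v: %s\n", err, out)
		}
	}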
	I1205 06:23:37.517122   48520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:23:37.517255   48520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:23:37.521337   48520 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1205 06:23:37.521369   48520 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:23:37.521389   48520 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1205 06:23:37.521397   48520 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:37.521404   48520 command_runner.go:130] > Access: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521409   48520 command_runner.go:130] > Modify: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521418   48520 command_runner.go:130] > Change: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521422   48520 command_runner.go:130] >  Birth: -
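	The "Will wait 60s for socket path" step resolves almost immediately: the stat output above shows the socket recreated at 06:23:37.489, right after the containerd restart. A minimal polling sketch for such a wait (the timeout matches the log; the poll interval is an assumption, not minikube's exact value):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		fmt.Println("timed out waiting for", sock)
	}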
	I1205 06:23:37.521666   48520 start.go:564] Will wait 60s for crictl version
	I1205 06:23:37.521723   48520 ssh_runner.go:195] Run: which crictl
	I1205 06:23:37.524716   48520 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:23:37.525219   48520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:23:37.548325   48520 command_runner.go:130] > Version:  0.1.0
	I1205 06:23:37.548510   48520 command_runner.go:130] > RuntimeName:  containerd
	I1205 06:23:37.548666   48520 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1205 06:23:37.548827   48520 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:23:37.551185   48520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:23:37.551250   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.571456   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.573276   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.591907   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.597675   48520 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:23:37.598882   48520 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:23:37.617416   48520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:23:37.621349   48520 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:23:37.621511   48520 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:23:37.621626   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:37.621687   48520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:23:37.643465   48520 command_runner.go:130] > {
	I1205 06:23:37.643493   48520 command_runner.go:130] >   "images":  [
	I1205 06:23:37.643498   48520 command_runner.go:130] >     {
	I1205 06:23:37.643515   48520 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:23:37.643522   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643527   48520 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:23:37.643531   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643535   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643540   48520 command_runner.go:130] >       "size":  "8032639",
	I1205 06:23:37.643545   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643549   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643552   48520 command_runner.go:130] >     },
	I1205 06:23:37.643566   48520 command_runner.go:130] >     {
	I1205 06:23:37.643574   48520 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:23:37.643578   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643583   48520 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:23:37.643586   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643591   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643597   48520 command_runner.go:130] >       "size":  "21166088",
	I1205 06:23:37.643601   48520 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:23:37.643605   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643608   48520 command_runner.go:130] >     },
	I1205 06:23:37.643611   48520 command_runner.go:130] >     {
	I1205 06:23:37.643618   48520 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:23:37.643622   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643627   48520 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:23:37.643630   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643634   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643638   48520 command_runner.go:130] >       "size":  "21134420",
	I1205 06:23:37.643642   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643645   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643648   48520 command_runner.go:130] >       },
	I1205 06:23:37.643652   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643656   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643660   48520 command_runner.go:130] >     },
	I1205 06:23:37.643663   48520 command_runner.go:130] >     {
	I1205 06:23:37.643670   48520 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:23:37.643674   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643687   48520 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:23:37.643693   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643698   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643703   48520 command_runner.go:130] >       "size":  "24676285",
	I1205 06:23:37.643707   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643715   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643719   48520 command_runner.go:130] >       },
	I1205 06:23:37.643727   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643734   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643737   48520 command_runner.go:130] >     },
	I1205 06:23:37.643740   48520 command_runner.go:130] >     {
	I1205 06:23:37.643747   48520 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:23:37.643750   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643756   48520 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:23:37.643759   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643763   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643767   48520 command_runner.go:130] >       "size":  "20658969",
	I1205 06:23:37.643771   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643783   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643790   48520 command_runner.go:130] >       },
	I1205 06:23:37.643794   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643798   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643800   48520 command_runner.go:130] >     },
	I1205 06:23:37.643804   48520 command_runner.go:130] >     {
	I1205 06:23:37.643811   48520 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:23:37.643817   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643822   48520 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:23:37.643826   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643830   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643835   48520 command_runner.go:130] >       "size":  "22428165",
	I1205 06:23:37.643840   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643844   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643853   48520 command_runner.go:130] >     },
	I1205 06:23:37.643856   48520 command_runner.go:130] >     {
	I1205 06:23:37.643863   48520 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:23:37.643867   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643873   48520 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:23:37.643878   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643887   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643893   48520 command_runner.go:130] >       "size":  "15389290",
	I1205 06:23:37.643900   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643905   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643908   48520 command_runner.go:130] >       },
	I1205 06:23:37.643911   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643915   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643918   48520 command_runner.go:130] >     },
	I1205 06:23:37.643921   48520 command_runner.go:130] >     {
	I1205 06:23:37.644021   48520 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:23:37.644028   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.644033   48520 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:23:37.644036   48520 command_runner.go:130] >       ],
	I1205 06:23:37.644041   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.644045   48520 command_runner.go:130] >       "size":  "265458",
	I1205 06:23:37.644049   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.644056   48520 command_runner.go:130] >         "value":  "65535"
	I1205 06:23:37.644060   48520 command_runner.go:130] >       },
	I1205 06:23:37.644064   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.644075   48520 command_runner.go:130] >       "pinned":  true
	I1205 06:23:37.644078   48520 command_runner.go:130] >     }
	I1205 06:23:37.644081   48520 command_runner.go:130] >   ]
	I1205 06:23:37.644084   48520 command_runner.go:130] > }
	I1205 06:23:37.646462   48520 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:23:37.646482   48520 cache_images.go:86] Images are preloaded, skipping loading
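	The "all images are preloaded" conclusion follows from parsing the `crictl images --output json` dump above and checking that every required Kubernetes image tag is present. A hedged sketch of decoding that JSON shape (struct names are mine; only fields visible in the log are modeled, and the sample payload is abbreviated):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Mirrors the fields visible in the crictl output above.
	type criImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"id":"sha256:d7b1...","repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
		var list criImageList
		if err := json.Unmarshal(raw, &list); err != nil {
			fmt.Println("decode:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		fmt.Println(have["registry.k8s.io/pause:3.10.1"]) // true => this image is cached
	}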
	I1205 06:23:37.646489   48520 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:23:37.646588   48520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:23:37.646657   48520 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:23:37.674707   48520 command_runner.go:130] > {
	I1205 06:23:37.674726   48520 command_runner.go:130] >   "cniconfig": {
	I1205 06:23:37.674732   48520 command_runner.go:130] >     "Networks": [
	I1205 06:23:37.674735   48520 command_runner.go:130] >       {
	I1205 06:23:37.674741   48520 command_runner.go:130] >         "Config": {
	I1205 06:23:37.674745   48520 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1205 06:23:37.674752   48520 command_runner.go:130] >           "Name": "cni-loopback",
	I1205 06:23:37.674757   48520 command_runner.go:130] >           "Plugins": [
	I1205 06:23:37.674761   48520 command_runner.go:130] >             {
	I1205 06:23:37.674765   48520 command_runner.go:130] >               "Network": {
	I1205 06:23:37.674769   48520 command_runner.go:130] >                 "ipam": {},
	I1205 06:23:37.674775   48520 command_runner.go:130] >                 "type": "loopback"
	I1205 06:23:37.674779   48520 command_runner.go:130] >               },
	I1205 06:23:37.674785   48520 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1205 06:23:37.674788   48520 command_runner.go:130] >             }
	I1205 06:23:37.674792   48520 command_runner.go:130] >           ],
	I1205 06:23:37.674802   48520 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1205 06:23:37.674806   48520 command_runner.go:130] >         },
	I1205 06:23:37.674813   48520 command_runner.go:130] >         "IFName": "lo"
	I1205 06:23:37.674816   48520 command_runner.go:130] >       }
	I1205 06:23:37.674820   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674825   48520 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1205 06:23:37.674829   48520 command_runner.go:130] >     "PluginDirs": [
	I1205 06:23:37.674832   48520 command_runner.go:130] >       "/opt/cni/bin"
	I1205 06:23:37.674836   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674840   48520 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1205 06:23:37.674844   48520 command_runner.go:130] >     "Prefix": "eth"
	I1205 06:23:37.674846   48520 command_runner.go:130] >   },
	I1205 06:23:37.674850   48520 command_runner.go:130] >   "config": {
	I1205 06:23:37.674854   48520 command_runner.go:130] >     "cdiSpecDirs": [
	I1205 06:23:37.674858   48520 command_runner.go:130] >       "/etc/cdi",
	I1205 06:23:37.674862   48520 command_runner.go:130] >       "/var/run/cdi"
	I1205 06:23:37.674871   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674875   48520 command_runner.go:130] >     "cni": {
	I1205 06:23:37.674879   48520 command_runner.go:130] >       "binDir": "",
	I1205 06:23:37.674883   48520 command_runner.go:130] >       "binDirs": [
	I1205 06:23:37.674888   48520 command_runner.go:130] >         "/opt/cni/bin"
	I1205 06:23:37.674891   48520 command_runner.go:130] >       ],
	I1205 06:23:37.674895   48520 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1205 06:23:37.674899   48520 command_runner.go:130] >       "confTemplate": "",
	I1205 06:23:37.674903   48520 command_runner.go:130] >       "ipPref": "",
	I1205 06:23:37.674907   48520 command_runner.go:130] >       "maxConfNum": 1,
	I1205 06:23:37.674911   48520 command_runner.go:130] >       "setupSerially": false,
	I1205 06:23:37.674916   48520 command_runner.go:130] >       "useInternalLoopback": false
	I1205 06:23:37.674919   48520 command_runner.go:130] >     },
	I1205 06:23:37.674927   48520 command_runner.go:130] >     "containerd": {
	I1205 06:23:37.674932   48520 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1205 06:23:37.674937   48520 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1205 06:23:37.674942   48520 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1205 06:23:37.674946   48520 command_runner.go:130] >       "runtimes": {
	I1205 06:23:37.674950   48520 command_runner.go:130] >         "runc": {
	I1205 06:23:37.674955   48520 command_runner.go:130] >           "ContainerAnnotations": null,
	I1205 06:23:37.674959   48520 command_runner.go:130] >           "PodAnnotations": null,
	I1205 06:23:37.674965   48520 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1205 06:23:37.674969   48520 command_runner.go:130] >           "cgroupWritable": false,
	I1205 06:23:37.674974   48520 command_runner.go:130] >           "cniConfDir": "",
	I1205 06:23:37.674978   48520 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1205 06:23:37.674982   48520 command_runner.go:130] >           "io_type": "",
	I1205 06:23:37.674986   48520 command_runner.go:130] >           "options": {
	I1205 06:23:37.674990   48520 command_runner.go:130] >             "BinaryName": "",
	I1205 06:23:37.674994   48520 command_runner.go:130] >             "CriuImagePath": "",
	I1205 06:23:37.674998   48520 command_runner.go:130] >             "CriuWorkPath": "",
	I1205 06:23:37.675002   48520 command_runner.go:130] >             "IoGid": 0,
	I1205 06:23:37.675006   48520 command_runner.go:130] >             "IoUid": 0,
	I1205 06:23:37.675011   48520 command_runner.go:130] >             "NoNewKeyring": false,
	I1205 06:23:37.675018   48520 command_runner.go:130] >             "Root": "",
	I1205 06:23:37.675022   48520 command_runner.go:130] >             "ShimCgroup": "",
	I1205 06:23:37.675026   48520 command_runner.go:130] >             "SystemdCgroup": false
	I1205 06:23:37.675030   48520 command_runner.go:130] >           },
	I1205 06:23:37.675035   48520 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1205 06:23:37.675042   48520 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1205 06:23:37.675046   48520 command_runner.go:130] >           "runtimePath": "",
	I1205 06:23:37.675051   48520 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1205 06:23:37.675055   48520 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1205 06:23:37.675059   48520 command_runner.go:130] >           "snapshotter": ""
	I1205 06:23:37.675062   48520 command_runner.go:130] >         }
	I1205 06:23:37.675065   48520 command_runner.go:130] >       }
	I1205 06:23:37.675068   48520 command_runner.go:130] >     },
	I1205 06:23:37.675077   48520 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1205 06:23:37.675082   48520 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1205 06:23:37.675087   48520 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1205 06:23:37.675091   48520 command_runner.go:130] >     "disableApparmor": false,
	I1205 06:23:37.675096   48520 command_runner.go:130] >     "disableHugetlbController": true,
	I1205 06:23:37.675100   48520 command_runner.go:130] >     "disableProcMount": false,
	I1205 06:23:37.675104   48520 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1205 06:23:37.675108   48520 command_runner.go:130] >     "enableCDI": true,
	I1205 06:23:37.675112   48520 command_runner.go:130] >     "enableSelinux": false,
	I1205 06:23:37.675117   48520 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1205 06:23:37.675121   48520 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1205 06:23:37.675126   48520 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1205 06:23:37.675131   48520 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1205 06:23:37.675135   48520 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1205 06:23:37.675139   48520 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1205 06:23:37.675144   48520 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1205 06:23:37.675150   48520 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675154   48520 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1205 06:23:37.675159   48520 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675164   48520 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1205 06:23:37.675172   48520 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1205 06:23:37.675176   48520 command_runner.go:130] >   },
	I1205 06:23:37.675179   48520 command_runner.go:130] >   "features": {
	I1205 06:23:37.675184   48520 command_runner.go:130] >     "supplemental_groups_policy": true
	I1205 06:23:37.675187   48520 command_runner.go:130] >   },
	I1205 06:23:37.675190   48520 command_runner.go:130] >   "golang": "go1.24.9",
	I1205 06:23:37.675201   48520 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675211   48520 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675215   48520 command_runner.go:130] >   "runtimeHandlers": [
	I1205 06:23:37.675218   48520 command_runner.go:130] >     {
	I1205 06:23:37.675222   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675227   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675231   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675234   48520 command_runner.go:130] >       }
	I1205 06:23:37.675237   48520 command_runner.go:130] >     },
	I1205 06:23:37.675240   48520 command_runner.go:130] >     {
	I1205 06:23:37.675244   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675249   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675253   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675257   48520 command_runner.go:130] >       },
	I1205 06:23:37.675261   48520 command_runner.go:130] >       "name": "runc"
	I1205 06:23:37.675264   48520 command_runner.go:130] >     }
	I1205 06:23:37.675267   48520 command_runner.go:130] >   ],
	I1205 06:23:37.675270   48520 command_runner.go:130] >   "status": {
	I1205 06:23:37.675273   48520 command_runner.go:130] >     "conditions": [
	I1205 06:23:37.675277   48520 command_runner.go:130] >       {
	I1205 06:23:37.675280   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675284   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675288   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675292   48520 command_runner.go:130] >         "type": "RuntimeReady"
	I1205 06:23:37.675295   48520 command_runner.go:130] >       },
	I1205 06:23:37.675298   48520 command_runner.go:130] >       {
	I1205 06:23:37.675304   48520 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1205 06:23:37.675312   48520 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1205 06:23:37.675316   48520 command_runner.go:130] >         "status": false,
	I1205 06:23:37.675320   48520 command_runner.go:130] >         "type": "NetworkReady"
	I1205 06:23:37.675323   48520 command_runner.go:130] >       },
	I1205 06:23:37.675326   48520 command_runner.go:130] >       {
	I1205 06:23:37.675330   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675334   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675338   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675343   48520 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1205 06:23:37.675347   48520 command_runner.go:130] >       }
	I1205 06:23:37.675350   48520 command_runner.go:130] >     ]
	I1205 06:23:37.675353   48520 command_runner.go:130] >   }
	I1205 06:23:37.675356   48520 command_runner.go:130] > }
	I1205 06:23:37.675685   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:37.675695   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
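	The `crictl info` status block above reports NetworkReady=false ("cni plugin not initialized"), which is expected at this stage since no CNI config has been installed yet, and it is consistent with the CNI manager recommending kindnet for the docker driver with containerd. A short sketch of extracting that condition from the same JSON (the types are assumptions based only on the fields shown):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type condition struct {
		Type    string `json:"type"`
		Status  bool   `json:"status"`
		Message string `json:"message"`
	}

	type info struct {
		Status struct {
			Conditions []condition `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		raw := []byte(`{"status":{"conditions":[{"type":"NetworkReady","status":false,"message":"Network plugin returns error: cni plugin not initialized"}]}}`)
		var i info
		if err := json.Unmarshal(raw, &i); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, c := range i.Status.Conditions {
			if c.Type == "NetworkReady" {
				fmt.Println("NetworkReady:", c.Status, "-", c.Message)
			}
		}
	}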
	I1205 06:23:37.675709   48520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:23:37.675732   48520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:23:37.675850   48520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:23:37.675917   48520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:23:37.682806   48520 command_runner.go:130] > kubeadm
	I1205 06:23:37.682826   48520 command_runner.go:130] > kubectl
	I1205 06:23:37.682831   48520 command_runner.go:130] > kubelet
	I1205 06:23:37.683692   48520 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:23:37.683790   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:23:37.691316   48520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:23:37.703871   48520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:23:37.716284   48520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
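The 2237-byte kubeadm.yaml.new copied above is the multi-document config rendered a few lines earlier. A minimal sketch (not minikube code; assumes gopkg.in/yaml.v3 and a hypothetical local copy of the file) that walks the documents and prints each one's kind:
	// kinds.go: list the documents in a rendered kubeadm.yaml.
	// Sketch only; the file name is an assumption taken from the log above.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of kubeadm.yaml.new
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
Against the config above this would print the InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents in order.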
	I1205 06:23:37.728952   48520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:23:37.732950   48520 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:23:37.733083   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.845498   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:37.867115   48520 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:23:37.867139   48520 certs.go:195] generating shared ca certs ...
	I1205 06:23:37.867158   48520 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:37.867407   48520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:23:37.867492   48520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:23:37.867536   48520 certs.go:257] generating profile certs ...
	I1205 06:23:37.867696   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:23:37.867788   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:23:37.867863   48520 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:23:37.867878   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:23:37.867909   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:23:37.867937   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:23:37.867957   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:23:37.867990   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:23:37.868021   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:23:37.868041   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:23:37.868082   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:23:37.868158   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:23:37.868216   48520 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:23:37.868231   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:23:37.868276   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:23:37.868325   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:23:37.868373   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:23:37.868453   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:37.868510   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:37.868541   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem -> /usr/share/ca-certificates/4192.pem
	I1205 06:23:37.868568   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /usr/share/ca-certificates/41922.pem
	I1205 06:23:37.869214   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:23:37.888705   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:23:37.907292   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:23:37.928487   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:23:37.946435   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:23:37.964299   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:23:37.982113   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:23:37.999555   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:23:38.025054   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:23:38.044579   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:23:38.064934   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:23:38.085119   48520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:23:38.098666   48520 ssh_runner.go:195] Run: openssl version
	I1205 06:23:38.104661   48520 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:23:38.105114   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.112530   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:23:38.119940   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123892   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123985   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.124059   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.164658   48520 command_runner.go:130] > 51391683
	I1205 06:23:38.165135   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:23:38.172385   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.179652   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:23:38.187250   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190908   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190946   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190996   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.231356   48520 command_runner.go:130] > 3ec20f2e
	I1205 06:23:38.231428   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:23:38.238676   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.245835   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:23:38.252946   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256642   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256892   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256951   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.296975   48520 command_runner.go:130] > b5213941
	I1205 06:23:38.297434   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
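The hex strings printed by openssl x509 -hash -noout above (51391683, 3ec20f2e, b5213941) are OpenSSL subject hashes, and the ln -fs / test -L pairs install and verify the <hash>.0 symlinks in /etc/ssl/certs that OpenSSL-based clients use to look up trusted CAs. A minimal Go sketch of the same link step (not minikube's implementation; reuses a path from the log and needs root to write /etc/ssl/certs):
	// hashlink.go: recreate the `ln -fs <cert> /etc/ssl/certs/<hash>.0` step.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

		// Ask openssl for the subject hash, exactly as the log does.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		// Emulate `ln -fs`: drop any stale link, then point <hash>.0 at the cert.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		log.Printf("linked %s -> %s", link, cert)
	}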
	I1205 06:23:38.304845   48520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308564   48520 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308587   48520 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:23:38.308594   48520 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1205 06:23:38.308601   48520 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:38.308607   48520 command_runner.go:130] > Access: 2025-12-05 06:19:31.018816392 +0000
	I1205 06:23:38.308612   48520 command_runner.go:130] > Modify: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308618   48520 command_runner.go:130] > Change: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308623   48520 command_runner.go:130] >  Birth: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308692   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:23:38.348984   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.349475   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:23:38.394714   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.395243   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:23:38.435818   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.436261   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:23:38.476805   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.477267   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:23:38.518071   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.518611   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:23:38.561014   48520 command_runner.go:130] > Certificate will not expire
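Each -checkend 86400 call above asks openssl whether the certificate will still be valid 24 hours from now; "Certificate will not expire" is its success message. The equivalent check in Go's standard library (a sketch; the file path is a hypothetical stand-in for the certs under /var/lib/minikube/certs):
	// checkend.go: Go equivalent of `openssl x509 -noout -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("front-proxy-client.crt") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Still valid 24h from now? Mirrors openssl's -checkend 86400.
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}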
	I1205 06:23:38.561491   48520 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:38.561574   48520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:23:38.561660   48520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:23:38.588277   48520 cri.go:89] found id: ""
	I1205 06:23:38.588366   48520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:23:38.596406   48520 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:23:38.596430   48520 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:23:38.596438   48520 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:23:38.597543   48520 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:23:38.597605   48520 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:23:38.597685   48520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:23:38.607655   48520 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:23:38.608093   48520 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.608241   48520 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "functional-101526" cluster setting kubeconfig missing "functional-101526" context setting]
	I1205 06:23:38.608622   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.609091   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.609324   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.609886   48520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:23:38.610063   48520 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:23:38.610057   48520 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:23:38.610120   48520 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:23:38.610139   48520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:23:38.610175   48520 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:23:38.610495   48520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:23:38.619299   48520 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:23:38.619367   48520 kubeadm.go:602] duration metric: took 21.74243ms to restartPrimaryControlPlane
	I1205 06:23:38.619392   48520 kubeadm.go:403] duration metric: took 57.910865ms to StartCluster
	I1205 06:23:38.619420   48520 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.619502   48520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.620189   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.620458   48520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:23:38.620608   48520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:23:38.620940   48520 addons.go:70] Setting storage-provisioner=true in profile "functional-101526"
	I1205 06:23:38.621064   48520 addons.go:239] Setting addon storage-provisioner=true in "functional-101526"
	I1205 06:23:38.621113   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.620703   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:38.621254   48520 addons.go:70] Setting default-storageclass=true in profile "functional-101526"
	I1205 06:23:38.621267   48520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-101526"
	I1205 06:23:38.621543   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.621837   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.622827   48520 out.go:179] * Verifying Kubernetes components...
	I1205 06:23:38.624023   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:38.667927   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.668094   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.668372   48520 addons.go:239] Setting addon default-storageclass=true in "functional-101526"
	I1205 06:23:38.668400   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.668811   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.682967   48520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:23:38.684152   48520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.684170   48520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:23:38.684236   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.712186   48520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:38.712208   48520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:23:38.712271   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.728758   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.759681   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.830869   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:38.880502   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.894150   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.597389   48520 node_ready.go:35] waiting up to 6m0s for node "functional-101526" to be "Ready" ...
	I1205 06:23:39.597462   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597505   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597540   48520 retry.go:31] will retry after 347.041569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597590   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597614   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597624   48520 retry.go:31] will retry after 291.359395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:23:39.597730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:39.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:39.889264   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.945727   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:39.950448   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.950487   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.950523   48520 retry.go:31] will retry after 542.352885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018611   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.018720   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018748   48520 retry.go:31] will retry after 498.666832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
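Every apply above fails with "connection refused" because nothing is listening on localhost:8441 yet, and retry.go reschedules each attempt after a growing, jittered delay (347ms, 542ms, 875ms, ...). A rough sketch of that retry shape (an assumption, not minikube's actual retry.go; assumes a plain kubectl on PATH rather than the full /var/lib/minikube/binaries path used in the log):
	// retrysketch.go: retry a kubectl apply with growing, jittered delays.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		delay := 300 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
			if err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2 // grow the base delay, roughly like the progression above
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}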
	I1205 06:23:40.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.098033   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.098325   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.493962   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:40.518418   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:40.562108   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.562226   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.562260   48520 retry.go:31] will retry after 406.138698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588025   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.588062   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588081   48520 retry.go:31] will retry after 594.532888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.598248   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.598327   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.598636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.969306   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.034172   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.037396   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.037482   48520 retry.go:31] will retry after 875.411269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.098568   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.098689   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.098986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:41.183391   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:41.246665   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.246713   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.246732   48520 retry.go:31] will retry after 928.241992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.598231   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.598321   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:41.598695   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
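The node_ready poll above re-issues the same GET against https://192.168.49.2:8441 roughly every 500ms and logs this warning while the TCP connection is still refused, i.e. while the apiserver is not listening yet. A minimal Go sketch (not minikube code) that waits for the port the same way:
	// waitport.go: block until a TCP port starts accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441"                 // endpoint taken from the log above
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s node wait
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver port is accepting connections")
				return
			}
			fmt.Println("still waiting:", err) // e.g. connect: connection refused
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", addr)
	}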
	I1205 06:23:41.913216   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.971936   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.975346   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.975382   48520 retry.go:31] will retry after 1.177811903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:42.175570   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:42.247042   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:42.247165   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.247197   48520 retry.go:31] will retry after 1.26909991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.598419   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.598544   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.598893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.097717   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.098051   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.154349   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:43.214165   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.217853   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.217885   48520 retry.go:31] will retry after 2.752289429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.517328   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:43.580346   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.580405   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.580434   48520 retry.go:31] will retry after 2.299289211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.598503   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.598628   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.598995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:43.599083   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:44.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.098502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.098803   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:44.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.597856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.097813   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.097918   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.098342   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.597661   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.880606   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:45.938914   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:45.938948   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.938966   48520 retry.go:31] will retry after 2.215203034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.971116   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:46.035840   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:46.035877   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.035895   48520 retry.go:31] will retry after 2.493998942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.098074   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.098239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.098559   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:46.098611   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 poll repeated at ~500ms intervals from 06:23:46.598 through 06:23:48.098, every response empty (connection refused); node_ready.go:55 logged the same "will retry" warning again at 06:23:48.098]
	I1205 06:23:48.155209   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 06:23:48.214512   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same connection-refused openapi validation error as above)
	I1205 06:23:48.214531   48520 retry.go:31] will retry after 5.617095307s
	I1205 06:23:48.530967   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 06:23:48.587811   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:23:48.587831   48520 retry.go:31] will retry after 3.714896929s
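
Both failing paths reduce to one symptom: nothing is listening on TCP 8441, so kubectl's OpenAPI fetch against localhost:8441 and the node polls against 192.168.49.2:8441 are refused before TLS even starts. A quick probe that shows the difference between a closed port and a responding apiserver, assuming only the standard library; the addresses are copied from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the two apiserver endpoints from the log. "connection refused"
	// here means the port is closed, i.e. kube-apiserver is not running,
	// rather than a TLS, auth, or validation problem.
	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: open\n", addr)
	}
}
```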
	[log condensed: node polls continued every ~500ms from 06:23:48.598 through 06:23:52.098, same connection-refused result; warning repeated at 06:23:50.598]
	I1205 06:23:52.303312   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 06:23:52.367543   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:23:52.367573   48520 retry.go:31] will retry after 3.56011918s
	[log condensed: polls from 06:23:52.597 through 06:23:53.597, same result; warning repeated at 06:23:52.598]
	I1205 06:23:53.832691   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 06:23:53.935567   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:23:53.935601   48520 retry.go:31] will retry after 7.968340753s
	[log condensed: polls from 06:23:54.098 through 06:23:55.598, same result; warning repeated at 06:23:54.598]
	I1205 06:23:55.928461   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 06:23:55.985849   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:23:55.985868   48520 retry.go:31] will retry after 13.95380646s
	[log condensed: polls from 06:23:56.098 through 06:24:01.598, same connection-refused result; warnings repeated at 06:23:57.098, 06:23:59.598 and 06:24:01.598]
	I1205 06:24:01.904244   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 06:24:01.966528   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:24:01.966559   48520 retry.go:31] will retry after 12.949527151s
	[log condensed: polls from 06:24:02.097 through 06:24:09.598, same connection-refused result; warnings repeated at 06:24:04.098, 06:24:06.597 and 06:24:08.598]
	I1205 06:24:09.939938   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 06:24:09.998554   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:24:09.998588   48520 retry.go:31] will retry after 16.114489594s
	[log condensed: polls from 06:24:10.097 through 06:24:14.597, same result; warnings repeated at 06:24:11.098 and 06:24:13.598]
	I1205 06:24:14.916824   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 06:24:14.975628   48520 addons.go:477] apply failed, will retry: Process exited with status 1 (same error)
	I1205 06:24:14.975646   48520 retry.go:31] will retry after 12.242306889s
	I1205 06:24:15.097909   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.098005   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.098359   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:15.597934   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.598277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:15.598320   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:16.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:16.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.597791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.598100   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.098010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.597774   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.597845   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.598218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:18.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:18.098183   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:18.598335   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.598405   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.598680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.098583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.098655   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.597882   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.597965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.598257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:20.097767   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.097837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.098151   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:20.098210   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:20.597718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.597821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.598163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.097868   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.097944   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.597748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:22.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.097863   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:22.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:22.597927   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.598018   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.598338   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.097757   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.097834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.098165   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.598412   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:24.598451   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:25.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.097818   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.098201   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:25.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.598206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.097703   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.114242   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:26.182245   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:26.182291   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.182309   48520 retry.go:31] will retry after 20.133806896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.597729   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.597815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:27.097723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:27.098168   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
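	(Each half-second poll above is node_ready checking whether the node's Ready condition has turned True; while the apiserver socket is closed, every attempt fails at dial time and the warning is emitted. An equivalent check written against client-go, as a sketch — the kubeconfig path and node name are copied from the log, and error handling is trimmed:

	// Sketch: poll a node's Ready condition with client-go until it is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-101526", metav1.GetOptions{})
			if err != nil {
				// Mirrors the warnings above: log and keep polling.
				fmt.Println("will retry:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	)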
	I1205 06:24:27.218635   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:27.278311   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:27.278351   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.278369   48520 retry.go:31] will retry after 29.943294063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
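	(The apply itself fails before anything reaches the cluster: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, hits the same refused socket, and the command exits 1. Passing --validate=false, as the error suggests, would only skip the schema fetch; the subsequent request would still fail while the apiserver is down. A sketch of driving the same command from Go with os/exec — paths copied from the log, and this is an illustration, not minikube's ssh_runner:

	// Sketch: run kubectl apply with an explicit KUBECONFIG and capture output,
	// the way the ssh_runner lines above invoke it inside the node.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// With the apiserver down this is "exit status 1", as in the log.
			fmt.Println("apply failed:", err)
		}
	}
	)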
	I1205 06:24:27.597675   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.597766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.598047   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.097690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.098089   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.597760   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.598077   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.597938   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.598028   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.598339   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:29.598384   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:30.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:30.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.097803   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.098330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.597811   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.598159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:32.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:32.098247   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:32.598587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.598658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.097615   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.097683   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.098041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.598348   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.598685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:34.098505   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.098598   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.098917   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:34.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:34.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.598097   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.098294   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.598401   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.598478   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.598810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:36.098627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.098700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.099015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:36.099064   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:36.597658   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.598106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.097792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.098117   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.598093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.098206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.597747   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:38.598117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:39.097836   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.097928   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.098334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:39.598071   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.598143   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.598413   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.098336   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.098679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.598542   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.598808   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:40.598849   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:41.098353   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.098417   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.098669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:41.598525   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.598609   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.597659   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:43.097673   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.098074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:43.098136   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:43.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.597761   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.098370   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.098629   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.598627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.598699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.599010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:45.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.097907   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:45.098408   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:45.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.597740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.098586   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.098659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.098977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.316378   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:46.382136   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:46.385605   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.385642   48520 retry.go:31] will retry after 25.45198813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.598118   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.598219   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.598522   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:47.098288   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.098354   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.098627   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:47.098682   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:47.598404   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.598746   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.098648   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.099013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.598372   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.598439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.598709   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:49.098599   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.099061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:49.099113   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.598014   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.598306   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.097691   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.598564   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.598829   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.097583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.097659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.098037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.598325   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.598399   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:51.598761   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:52.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.098621   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.098978   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.597773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.097590   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.097657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.097905   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.597594   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.597666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.597973   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:54.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:54.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:54.597977   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.598054   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.598305   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.097821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.598396   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.598475   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:56.098321   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.098407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.098685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:56.098727   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:56.598502   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.598588   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.598876   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.097587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.097675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.097966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.222289   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:57.284849   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:57.284890   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.284910   48520 retry.go:31] will retry after 41.469992375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.598343   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.598669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:58.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.098574   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.098880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:58.098930   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:58.597606   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.597675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.098608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.098916   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.597662   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.097620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.097697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.098017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.598474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:00.598791   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:01.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.099039   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:01.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.597775   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.598053   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.098050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:03.097717   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.097804   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.098169   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:03.098231   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:03.597623   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.597691   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.097739   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.098119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.597929   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.598003   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:05.098361   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.098426   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:05.098730   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:05.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.598783   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.098625   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.098705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.099060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.598425   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.598694   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:07.098518   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:07.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:07.597640   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.598023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.097648   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.098028   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.597762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.097853   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.098296   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.598150   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.598411   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:09.598454   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:10.097719   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:10.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.598121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.097959   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.597642   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.838548   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:25:11.913959   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914006   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914113   48520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
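	(At this point the default-storageclass addon has exhausted its retries and minikube surfaces the failure to the user — the "!" lines — while the node-readiness poll keeps running. Every distinct error in this stretch reduces to the same root cause: nothing is listening on port 8441. A quick way to confirm that directly, as a sketch with the address taken from the log:

	// Sketch: probe the apiserver socket to confirm the shared root cause.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Prints "connect: connection refused" while kube-apiserver is down.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
	)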
	I1205 06:25:12.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.098446   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.098756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:12.098805   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: this GET /api/v1/nodes/functional-101526 poll repeated every ~500ms from 06:25:12 through 06:25:38, each response empty; node_ready.go logged the same "connection refused" retry warning roughly every 2.5s]
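	For context, the condensed loop above is minikube waiting for the node's "Ready" condition while the apiserver on 192.168.49.2:8441 is refusing connections. A minimal client-go sketch of such a readiness poll follows; the function name, timeout, and error handling are illustrative assumptions, not minikube's actual node_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the apiserver until the named node reports Ready,
	// retrying through "connection refused" just as the log above does.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitNodeReady(cs, "functional-101526", 6*time.Minute))
	}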
	I1205 06:25:38.755357   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:25:38.811504   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811556   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811634   48520 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:25:38.813895   48520 out.go:179] * Enabled addons: 
	I1205 06:25:38.815272   48520 addons.go:530] duration metric: took 2m0.19467206s for enable addons: enabled=[]
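	The stderr above suggests --validate=false as a way past the failed openapi download. A hedged Go sketch of that retry, reusing the exact command and manifest paths from the log (note: skipping validation only avoids the openapi fetch; the apply itself still needs a reachable apiserver, which this run never got):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyNoValidate re-runs the kubectl apply from the log with
	// --validate=false, the workaround the error message itself suggests.
	func applyNoValidate(manifest string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		if err := applyNoValidate("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println(err)
		}
	}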
	[log condensed: the node-readiness poll resumed at 06:25:39 and continued every ~500ms through 06:26:11, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go kept logging the same retry warning roughly every 2.5s]
	I1205 06:26:11.598510   48520 type.go:168] "Request Body" body=""
	I1205 06:26:11.598593   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:11.598863   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:12.097604   48520 type.go:168] "Request Body" body=""
	I1205 06:26:12.097676   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:12.097998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:12.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:26:12.597698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:12.598016   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:13.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:13.097781   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:13.098093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:13.597735   48520 type.go:168] "Request Body" body=""
	I1205 06:26:13.597806   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:13.598139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:13.598192   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:14.098341   48520 type.go:168] "Request Body" body=""
	I1205 06:26:14.098414   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:14.098675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:14.597561   48520 type.go:168] "Request Body" body=""
	I1205 06:26:14.597634   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:14.597953   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:15.097674   48520 type.go:168] "Request Body" body=""
	I1205 06:26:15.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:15.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:15.597858   48520 type.go:168] "Request Body" body=""
	I1205 06:26:15.597937   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:15.598207   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:15.598252   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:16.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:16.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:16.098136   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:16.597837   48520 type.go:168] "Request Body" body=""
	I1205 06:26:16.597922   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:16.598258   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:17.097730   48520 type.go:168] "Request Body" body=""
	I1205 06:26:17.097795   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:17.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:17.597795   48520 type.go:168] "Request Body" body=""
	I1205 06:26:17.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:17.598177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:18.097743   48520 type.go:168] "Request Body" body=""
	I1205 06:26:18.097833   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:18.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:18.098240   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:18.597644   48520 type.go:168] "Request Body" body=""
	I1205 06:26:18.597716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:18.598040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:19.097951   48520 type.go:168] "Request Body" body=""
	I1205 06:26:19.098029   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:19.098364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:19.598124   48520 type.go:168] "Request Body" body=""
	I1205 06:26:19.598197   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:19.598489   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:20.098213   48520 type.go:168] "Request Body" body=""
	I1205 06:26:20.098283   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:20.098535   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:20.098577   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:20.598411   48520 type.go:168] "Request Body" body=""
	I1205 06:26:20.598481   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:20.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:21.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:26:21.098642   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:21.098953   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:21.598371   48520 type.go:168] "Request Body" body=""
	I1205 06:26:21.598445   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:21.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:22.098546   48520 type.go:168] "Request Body" body=""
	I1205 06:26:22.098626   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:22.098949   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:22.099002   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:22.597666   48520 type.go:168] "Request Body" body=""
	I1205 06:26:22.597742   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:22.598072   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:23.098292   48520 type.go:168] "Request Body" body=""
	I1205 06:26:23.098363   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:23.098623   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:23.598300   48520 type.go:168] "Request Body" body=""
	I1205 06:26:23.598378   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:23.598681   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:24.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:26:24.098570   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:24.098890   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:24.597874   48520 type.go:168] "Request Body" body=""
	I1205 06:26:24.597942   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:24.598193   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:24.598235   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:25.097954   48520 type.go:168] "Request Body" body=""
	I1205 06:26:25.098049   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:25.098380   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:25.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:26:25.598171   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:25.598468   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:26.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:26:26.098335   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:26.098599   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:26.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:26:26.598513   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:26.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:26.598819   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:27.098598   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.098666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.098997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:27.598342   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.598674   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.098464   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.098548   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.098911   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.598054   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:29.097810   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:29.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:29.598097   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.598171   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.598512   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.098305   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.098383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.098722   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.598362   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.598429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.598778   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:31.098515   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.098594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.098941   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:31.099003   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:31.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.598069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.097668   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.097740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.098007   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.597672   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.598073   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.097726   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.098170   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.598322   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.598390   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:33.598681   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:34.098437   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.098514   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.098910   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:34.597764   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.597837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.598152   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.097815   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.097898   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.098218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.598058   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:36.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.098272   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:36.098329   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:36.597583   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.597647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.597901   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.097624   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.597700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.598016   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.097738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.597805   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.598144   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:38.598199   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:39.097874   48520 type.go:168] "Request Body" body=""
	I1205 06:26:39.097953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:39.098299   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:39.598056   48520 type.go:168] "Request Body" body=""
	I1205 06:26:39.598120   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:39.598381   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:40.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:26:40.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:40.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:40.597834   48520 type.go:168] "Request Body" body=""
	I1205 06:26:40.597922   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:40.598235   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:40.598299   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:41.097613   48520 type.go:168] "Request Body" body=""
	I1205 06:26:41.097684   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:41.097934   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:41.597698   48520 type.go:168] "Request Body" body=""
	I1205 06:26:41.597780   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:41.598095   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:42.097741   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.097846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.098252   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:42.597939   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.598259   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:43.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.097781   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.098098   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:43.098152   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:43.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.597750   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.598108   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.097834   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.098175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.598119   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.598197   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.598510   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:45.098372   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.098865   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:45.098935   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:45.598338   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.598404   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.598666   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.098497   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.098980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.597691   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.597786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.098061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.598104   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:47.598156   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:48.097827   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.098233   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:48.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.597728   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.597996   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.097698   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.097770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.597943   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.598016   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.598353   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:49.598407   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:50.097817   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.097883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:50.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.597641   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.597986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:52.097689   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.098130   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:52.098200   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.598147   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.097621   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.097992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.597706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.597779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.598139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:54.097842   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.097924   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:54.098348   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:54.598061   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.598132   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.598397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.097700   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.097773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.598059   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:56.098320   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.098388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.098645   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:56.098686   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:56.598516   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.598594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.598880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.097600   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.097674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.097997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.598395   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:58.098426   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.098498   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.098810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:58.098866   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:58.597573   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.597644   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.597980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.098351   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.098416   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.098680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.598057   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:00.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.099364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1205 06:27:00.099443   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:00.598194   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.598268   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.598536   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.098258   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.098330   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.098652   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.598444   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.598519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.598790   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.098423   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.098519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.098885   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.597608   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.597679   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.598006   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:02.598063   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:04.598512   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:06.598794   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:08.598917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:11.098748   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:13.598137   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:15.598770   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:17.598861   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:20.098698   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:22.598069   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:24.598468   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:26.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:29.098015   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:31.098146   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:33.098349   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:35.098636   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:37.098917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:39.099121   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:41.598120   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:43.598825   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:46.098054   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:48.098971   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:50.598692   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:55.098525   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:57.099013   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	W1205 06:27:59.598492   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:00.098337   48520 type.go:168] "Request Body" body=""
	I1205 06:28:00.098427   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:00.098788   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:00.598433   48520 type.go:168] "Request Body" body=""
	I1205 06:28:00.598513   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:00.598794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:01.098654   48520 type.go:168] "Request Body" body=""
	I1205 06:28:01.098740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:01.099090   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:01.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:28:01.597780   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:01.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:02.097660   48520 type.go:168] "Request Body" body=""
	I1205 06:28:02.097737   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:02.098064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:02.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:02.597688   48520 type.go:168] "Request Body" body=""
	I1205 06:28:02.597767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:02.598060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:03.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:28:03.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:03.098134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:03.597667   48520 type.go:168] "Request Body" body=""
	I1205 06:28:03.597737   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:03.597990   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:04.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:28:04.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:04.098055   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:04.597984   48520 type.go:168] "Request Body" body=""
	I1205 06:28:04.598055   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:04.598390   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:04.598444   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:05.097832   48520 type.go:168] "Request Body" body=""
	I1205 06:28:05.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:05.098218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:05.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:28:05.597767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:05.598108   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:06.097810   48520 type.go:168] "Request Body" body=""
	I1205 06:28:06.097886   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:06.098254   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:06.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:06.597765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:06.598082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:07.097703   48520 type.go:168] "Request Body" body=""
	I1205 06:28:07.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:07.098088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:07.098142   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:07.597834   48520 type.go:168] "Request Body" body=""
	I1205 06:28:07.597911   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:07.598223   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:08.097644   48520 type.go:168] "Request Body" body=""
	I1205 06:28:08.097716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:08.098023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:08.597760   48520 type.go:168] "Request Body" body=""
	I1205 06:28:08.597837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:08.598171   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:09.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:28:09.098013   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:09.098328   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:09.098389   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:09.598044   48520 type.go:168] "Request Body" body=""
	I1205 06:28:09.598116   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:09.598364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:10.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:28:10.097762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:10.098113   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:10.597798   48520 type.go:168] "Request Body" body=""
	I1205 06:28:10.597870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:10.598188   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:11.097802   48520 type.go:168] "Request Body" body=""
	I1205 06:28:11.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:11.098134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:11.597795   48520 type.go:168] "Request Body" body=""
	I1205 06:28:11.597864   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:11.598173   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:11.598226   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:12.097903   48520 type.go:168] "Request Body" body=""
	I1205 06:28:12.097983   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:12.098374   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:12.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:28:12.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:12.597970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:13.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:28:13.097707   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:13.098032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:13.597706   48520 type.go:168] "Request Body" body=""
	I1205 06:28:13.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:13.598105   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:14.097811   48520 type.go:168] "Request Body" body=""
	I1205 06:28:14.097887   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:14.098156   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:14.098195   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:14.598036   48520 type.go:168] "Request Body" body=""
	I1205 06:28:14.598112   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:14.598466   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:15.097733   48520 type.go:168] "Request Body" body=""
	I1205 06:28:15.097808   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:15.098140   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:15.597813   48520 type.go:168] "Request Body" body=""
	I1205 06:28:15.597884   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:15.598142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:16.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:28:16.097796   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:16.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:16.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:28:16.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:16.598101   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:16.598156   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:17.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.097876   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.098194   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:17.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.597756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.598070   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.098215   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.597886   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.597953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.598207   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:18.598246   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:19.098191   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.098596   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:19.598092   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.598164   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.598453   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.098276   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.098366   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.098647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.598966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:20.599023   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:21.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.097783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:21.597816   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.597890   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.598175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.097699   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.097778   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.597810   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.597883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:23.097913   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.097992   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.098257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:23.098301   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:23.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.097782   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.598076   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.598142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.598394   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:25.098055   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.098130   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.098501   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:25.098557   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:25.598048   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.598119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.598461   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.098278   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.098345   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.098636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.598407   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.598479   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.598794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:27.098588   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.098668   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.099022   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:27.099091   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:27.598346   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.598675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.098428   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.098506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.098818   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.598580   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.598652   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.598974   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.097745   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.598001   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.598100   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.598428   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:29.598481   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:30.098006   48520 type.go:168] "Request Body" body=""
	I1205 06:28:30.098087   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:30.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:30.597700   48520 type.go:168] "Request Body" body=""
	I1205 06:28:30.597786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:30.598160   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:31.097774   48520 type.go:168] "Request Body" body=""
	I1205 06:28:31.097846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:31.098181   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:31.597850   48520 type.go:168] "Request Body" body=""
	I1205 06:28:31.597930   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:31.598261   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:32.097657   48520 type.go:168] "Request Body" body=""
	I1205 06:28:32.097732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:32.098067   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:32.098128   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:32.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:28:32.597865   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:32.598198   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.097897   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.097968   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.098282   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.597749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.597992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.097753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.597944   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.598021   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.598350   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:34.598404   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:35.097649   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.097714   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.097970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:35.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.598762   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.098567   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.098647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.098983   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.598412   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.598488   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.598831   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:36.598888   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:37.098658   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.098727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.099076   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:37.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.598120   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.097779   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.097852   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.098158   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.597763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.598087   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:39.097696   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.097815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.098132   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:39.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:39.598044   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.598118   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.598367   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.097715   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.097794   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.098133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.597711   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.597788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.598088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.097630   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.097700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.597651   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:41.598088   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:42.097792   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.098293   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:42.597741   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.097766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.598466   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:43.598827   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:44.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.098438   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.098690   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:44.598631   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.598985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.097712   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.097807   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.098219   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.597940   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.598275   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:46.097986   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.098060   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.098414   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:46.098473   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:46.598239   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.598322   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.598689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.098433   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.098520   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.098851   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.598595   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.598663   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.598967   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.097777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.098143   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.598005   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:48.598051   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:49.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.097752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.098085   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.598007   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.598332   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.097650   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.097722   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.098001   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.598180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:50.598236   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:51.097912   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.097985   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.098261   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:51.597646   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.597720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.598030   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:52.097764   48520 type.go:168] "Request Body" body=""
	I1205 06:28:52.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:52.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-101526 request and empty response repeat roughly every 500ms from 06:28:52 through 06:29:39, and the identical "connection refused" will-retry warning recurs about every 2s; the intervening repetitions are elided here ...]
	W1205 06:29:37.598248   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:39.098350   48520 type.go:168] "Request Body" body=""
	I1205 06:29:39.098430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:39.098781   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.598588   48520 node_ready.go:38] duration metric: took 6m0.001106708s for node "functional-101526" to be "Ready" ...
	I1205 06:29:39.600415   48520 out.go:203] 
	W1205 06:29:39.601638   48520 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:29:39.601661   48520 out.go:285] * 
	W1205 06:29:39.603936   48520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:29:39.604891   48520 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-101526 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.995020192s for "functional-101526" cluster.
I1205 06:29:40.508006    4192 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
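
The stderr above reduces to a single pattern: the client re-issues the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-101526 roughly every 500ms, every attempt dies at TCP connect ("connection refused"), and once the 6m0s wait budget is spent the start aborts with "WaitNodeCondition: context deadline exceeded". Below is a minimal Go sketch of that poll-until-deadline shape; all names are hypothetical, and this is not minikube's actual node_ready.go, only the loop the log implies.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitForNodeEndpoint polls url until it answers or ctx expires. It mirrors
	// the ~500ms GET cadence and the 6m deadline visible in the stderr above.
	func waitForNodeEndpoint(ctx context.Context, url string, every time.Duration) error {
		tick := time.NewTicker(every)
		defer tick.Stop()
		for {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // the apiserver finally answered
			}
			// e.g. `dial tcp 192.168.49.2:8441: connect: connection refused`
			fmt.Printf("error getting node (will retry): %v\n", err)
			select {
			case <-ctx.Done():
				return errors.New("WaitNodeCondition: context deadline exceeded")
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-101526"
		if err := waitForNodeEndpoint(ctx, url, 500*time.Millisecond); err != nil {
			fmt.Println("Exiting due to GUEST_START:", err)
		}
	}

Any HTTPS client pointed at 192.168.49.2:8441 would show the same symptom while the apiserver is down: the failure happens at TCP connect, before TLS is ever attempted.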
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
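Editor's note: the host-port mappings recorded under NetworkSettings.Ports in the inspect dump above are the same values the harness resolves at runtime. A minimal shell sketch of that lookup, using docker's Go-template syntax and the container name from this run:

    # Resolve the host port published for the node container's SSH port (22/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      functional-101526
    # In this run: 32788, bound to 127.0.0.1

The minikube logs further down issue exactly this inspect call whenever they need the SSH endpoint.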
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (377.537176ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/41922.pem                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /usr/share/ca-certificates/41922.pem                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save kicbase/echo-server:functional-226068 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image rm kicbase/echo-server:functional-226068 --alsologtostderr                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format short --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format yaml --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format json --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format table --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh            │ functional-226068 ssh pgrep buildkitd                                                                                                                           │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image          │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                          │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete         │ -p functional-226068                                                                                                                                            │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start          │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start          │ -p functional-101526 --alsologtostderr -v=8                                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:23:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:23:34.555640   48520 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:23:34.555757   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.555768   48520 out.go:374] Setting ErrFile to fd 2...
	I1205 06:23:34.555773   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.556051   48520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:23:34.556413   48520 out.go:368] Setting JSON to false
	I1205 06:23:34.557238   48520 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3961,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:23:34.557311   48520 start.go:143] virtualization:  
	I1205 06:23:34.559039   48520 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:23:34.560249   48520 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:23:34.560305   48520 notify.go:221] Checking for updates...
	I1205 06:23:34.562854   48520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:23:34.564039   48520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:34.565137   48520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:23:34.566333   48520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:23:34.567598   48520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:23:34.569245   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:34.569354   48520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:23:34.590301   48520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:23:34.590415   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.653386   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.643338894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.653494   48520 docker.go:319] overlay module found
	I1205 06:23:34.655010   48520 out.go:179] * Using the docker driver based on existing profile
	I1205 06:23:34.656153   48520 start.go:309] selected driver: docker
	I1205 06:23:34.656167   48520 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.656269   48520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:23:34.656363   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.713521   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.704040472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.713916   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:34.713979   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:34.714025   48520 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.715459   48520 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:23:34.716546   48520 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:23:34.717743   48520 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:23:34.719027   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:34.719180   48520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:23:34.738218   48520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:23:34.738240   48520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:23:34.779237   48520 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:23:34.998431   48520 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
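Editor's note: the two 404s above are expected for a pre-release Kubernetes build. No preload tarball is published for v1.35.0-beta.0, so minikube falls back to its per-image cache, which is what the cache.go lines below record. An illustrative probe of the same URL (not part of the test run):

    # A released version would answer 200 here; the beta answers 404,
    # matching the preload.go warning above.
    curl -s -o /dev/null -w '%{http_code}\n' \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4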
	I1205 06:23:34.998624   48520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:23:34.998714   48520 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998796   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:23:34.998805   48520 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.154µs
	I1205 06:23:34.998818   48520 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:23:34.998828   48520 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998857   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:23:34.998862   48520 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.504µs
	I1205 06:23:34.998868   48520 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998878   48520 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998890   48520 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:23:34.998904   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:23:34.998909   48520 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.361µs
	I1205 06:23:34.998916   48520 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998919   48520 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998925   48520 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998953   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:23:34.998955   48520 start.go:364] duration metric: took 23.967µs to acquireMachinesLock for "functional-101526"
	I1205 06:23:34.998958   48520 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.961µs
	I1205 06:23:34.998965   48520 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998968   48520 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:23:34.998973   48520 fix.go:54] fixHost starting: 
	I1205 06:23:34.998973   48520 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999001   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:23:34.999006   48520 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.323µs
	I1205 06:23:34.999012   48520 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:23:34.999020   48520 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999055   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:23:34.999060   48520 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.108µs
	I1205 06:23:34.999066   48520 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:23:34.999076   48520 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999117   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:23:34.999122   48520 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.426µs
	I1205 06:23:34.999127   48520 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:23:34.999135   48520 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999162   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:23:34.999167   48520 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.427µs
	I1205 06:23:34.999172   48520 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:23:34.999180   48520 cache.go:87] Successfully saved all images to host disk.
	I1205 06:23:34.999246   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:35.021908   48520 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:23:35.021948   48520 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:23:35.023534   48520 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:23:35.023573   48520 machine.go:94] provisionDockerMachine start ...
	I1205 06:23:35.023662   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.041007   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.041395   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.041419   48520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:23:35.188597   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.188620   48520 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:23:35.188686   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.205143   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.205585   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.205604   48520 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:23:35.361531   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.361628   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.381210   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.381606   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.381630   48520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
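Editor's note: the provisioning script above pins the container's hostname to 127.0.1.1 so it resolves locally. A hypothetical spot-check against the node container from this run (illustrative, not taken from the log):

    # Confirm the hostname entry the script installed inside the node container.
    docker exec functional-101526 grep functional-101526 /etc/hosts
    # expected: a '127.0.1.1 functional-101526' line, plus the entries docker itself adds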
	I1205 06:23:35.529415   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:23:35.529441   48520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:23:35.529467   48520 ubuntu.go:190] setting up certificates
	I1205 06:23:35.529477   48520 provision.go:84] configureAuth start
	I1205 06:23:35.529543   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:35.549800   48520 provision.go:143] copyHostCerts
	I1205 06:23:35.549840   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549879   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:23:35.549910   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549992   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:23:35.550081   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550102   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:23:35.550111   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550138   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:23:35.550192   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550212   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:23:35.550220   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550244   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:23:35.550303   48520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:23:35.896062   48520 provision.go:177] copyRemoteCerts
	I1205 06:23:35.896131   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:23:35.896172   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.915295   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.022077   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:23:36.022150   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:23:36.041535   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:23:36.041647   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:23:36.060235   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:23:36.060320   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:23:36.078423   48520 provision.go:87] duration metric: took 548.924199ms to configureAuth
	I1205 06:23:36.078451   48520 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:23:36.078638   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:36.078652   48520 machine.go:97] duration metric: took 1.055064213s to provisionDockerMachine
	I1205 06:23:36.078660   48520 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:23:36.078671   48520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:23:36.078720   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:23:36.078768   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.096049   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.200907   48520 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:23:36.204162   48520 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:23:36.204182   48520 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:23:36.204187   48520 command_runner.go:130] > VERSION_ID="12"
	I1205 06:23:36.204192   48520 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:23:36.204196   48520 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:23:36.204200   48520 command_runner.go:130] > ID=debian
	I1205 06:23:36.204205   48520 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:23:36.204210   48520 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:23:36.204232   48520 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:23:36.204297   48520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:23:36.204316   48520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:23:36.204326   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:23:36.204380   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:23:36.204473   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:23:36.204485   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /etc/ssl/certs/41922.pem
	I1205 06:23:36.204565   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:23:36.204573   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> /etc/test/nested/copy/4192/hosts
	I1205 06:23:36.204620   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:23:36.211988   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:36.229308   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:23:36.246073   48520 start.go:296] duration metric: took 167.399532ms for postStartSetup
	I1205 06:23:36.246163   48520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:23:36.246202   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.262461   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.366102   48520 command_runner.go:130] > 13%
	I1205 06:23:36.366647   48520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:23:36.370745   48520 command_runner.go:130] > 169G
	I1205 06:23:36.371285   48520 fix.go:56] duration metric: took 1.372308275s for fixHost
	I1205 06:23:36.371306   48520 start.go:83] releasing machines lock for "functional-101526", held for 1.37234313s
	I1205 06:23:36.371420   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:36.390415   48520 ssh_runner.go:195] Run: cat /version.json
	I1205 06:23:36.390468   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.391053   48520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:23:36.391113   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.419642   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.424516   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.520794   48520 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:23:36.520923   48520 ssh_runner.go:195] Run: systemctl --version
	I1205 06:23:36.606649   48520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:23:36.609416   48520 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:23:36.609453   48520 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:23:36.609534   48520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:23:36.613918   48520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:23:36.613964   48520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:23:36.614023   48520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:23:36.621686   48520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:23:36.621710   48520 start.go:496] detecting cgroup driver to use...
	I1205 06:23:36.621769   48520 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:23:36.621841   48520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:23:36.637331   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:23:36.650267   48520 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:23:36.650327   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:23:36.665934   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:23:36.679279   48520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:23:36.785775   48520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:23:36.894469   48520 docker.go:234] disabling docker service ...
	I1205 06:23:36.894545   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:23:36.910313   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:23:36.923239   48520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:23:37.033287   48520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:23:37.168163   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:23:37.180578   48520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:23:37.193942   48520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:23:37.194023   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:23:37.202471   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:23:37.211003   48520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:23:37.211119   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:23:37.219839   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.228562   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:23:37.237276   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.245970   48520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:23:37.253895   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:23:37.262450   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:23:37.271505   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:23:37.280464   48520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:23:37.287174   48520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:23:37.288154   48520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:23:37.295694   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.408389   48520 ssh_runner.go:195] Run: sudo systemctl restart containerd
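Editor's note: the sed edits above pin containerd's pause image, cgroup driver (SystemdCgroup = false for cgroupfs), and CNI conf directory before this restart. A sketch of how the result could be spot-checked on the node (illustrative command, not taken from the log):

    # Show the keys the sed edits target in containerd's config.
    docker exec functional-101526 grep -nE \
      'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' \
      /etc/containerd/config.toml
    # Expected after the edits: SystemdCgroup = false,
    # sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false,
    # enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d"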
	I1205 06:23:37.517122   48520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:23:37.517255   48520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:23:37.521337   48520 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1205 06:23:37.521369   48520 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:23:37.521389   48520 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1205 06:23:37.521397   48520 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:37.521404   48520 command_runner.go:130] > Access: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521409   48520 command_runner.go:130] > Modify: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521418   48520 command_runner.go:130] > Change: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521422   48520 command_runner.go:130] >  Birth: -
	I1205 06:23:37.521666   48520 start.go:564] Will wait 60s for crictl version
	I1205 06:23:37.521723   48520 ssh_runner.go:195] Run: which crictl
	I1205 06:23:37.524716   48520 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:23:37.525219   48520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:23:37.548325   48520 command_runner.go:130] > Version:  0.1.0
	I1205 06:23:37.548510   48520 command_runner.go:130] > RuntimeName:  containerd
	I1205 06:23:37.548666   48520 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1205 06:23:37.548827   48520 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:23:37.551185   48520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:23:37.551250   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.571456   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.573276   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.591907   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.597675   48520 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:23:37.598882   48520 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:23:37.617416   48520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:23:37.621349   48520 command_runner.go:130] > 192.168.49.1	host.minikube.internal
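Editor's note: host.minikube.internal is the alias minikube writes into the node's /etc/hosts so workloads can reach the host through the docker network gateway (192.168.49.1 here); the grep above confirms the entry survived from the first start. An equivalent manual check (illustrative):

    docker exec functional-101526 grep host.minikube.internal /etc/hosts
    # => 192.168.49.1	host.minikube.internal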
	I1205 06:23:37.621511   48520 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:23:37.621626   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:37.621687   48520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:23:37.643465   48520 command_runner.go:130] > {
	I1205 06:23:37.643493   48520 command_runner.go:130] >   "images":  [
	I1205 06:23:37.643498   48520 command_runner.go:130] >     {
	I1205 06:23:37.643515   48520 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:23:37.643522   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643527   48520 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:23:37.643531   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643535   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643540   48520 command_runner.go:130] >       "size":  "8032639",
	I1205 06:23:37.643545   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643549   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643552   48520 command_runner.go:130] >     },
	I1205 06:23:37.643566   48520 command_runner.go:130] >     {
	I1205 06:23:37.643574   48520 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:23:37.643578   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643583   48520 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:23:37.643586   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643591   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643597   48520 command_runner.go:130] >       "size":  "21166088",
	I1205 06:23:37.643601   48520 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:23:37.643605   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643608   48520 command_runner.go:130] >     },
	I1205 06:23:37.643611   48520 command_runner.go:130] >     {
	I1205 06:23:37.643618   48520 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:23:37.643622   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643627   48520 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:23:37.643630   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643634   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643638   48520 command_runner.go:130] >       "size":  "21134420",
	I1205 06:23:37.643642   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643645   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643648   48520 command_runner.go:130] >       },
	I1205 06:23:37.643652   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643656   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643660   48520 command_runner.go:130] >     },
	I1205 06:23:37.643663   48520 command_runner.go:130] >     {
	I1205 06:23:37.643670   48520 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:23:37.643674   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643687   48520 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:23:37.643693   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643698   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643703   48520 command_runner.go:130] >       "size":  "24676285",
	I1205 06:23:37.643707   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643715   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643719   48520 command_runner.go:130] >       },
	I1205 06:23:37.643727   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643734   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643737   48520 command_runner.go:130] >     },
	I1205 06:23:37.643740   48520 command_runner.go:130] >     {
	I1205 06:23:37.643747   48520 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:23:37.643750   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643756   48520 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:23:37.643759   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643763   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643767   48520 command_runner.go:130] >       "size":  "20658969",
	I1205 06:23:37.643771   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643783   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643790   48520 command_runner.go:130] >       },
	I1205 06:23:37.643794   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643798   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643800   48520 command_runner.go:130] >     },
	I1205 06:23:37.643804   48520 command_runner.go:130] >     {
	I1205 06:23:37.643811   48520 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:23:37.643817   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643822   48520 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:23:37.643826   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643830   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643835   48520 command_runner.go:130] >       "size":  "22428165",
	I1205 06:23:37.643840   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643844   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643853   48520 command_runner.go:130] >     },
	I1205 06:23:37.643856   48520 command_runner.go:130] >     {
	I1205 06:23:37.643863   48520 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:23:37.643867   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643873   48520 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:23:37.643878   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643887   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643893   48520 command_runner.go:130] >       "size":  "15389290",
	I1205 06:23:37.643900   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643905   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643908   48520 command_runner.go:130] >       },
	I1205 06:23:37.643911   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643915   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643918   48520 command_runner.go:130] >     },
	I1205 06:23:37.643921   48520 command_runner.go:130] >     {
	I1205 06:23:37.644021   48520 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:23:37.644028   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.644033   48520 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:23:37.644036   48520 command_runner.go:130] >       ],
	I1205 06:23:37.644041   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.644045   48520 command_runner.go:130] >       "size":  "265458",
	I1205 06:23:37.644049   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.644056   48520 command_runner.go:130] >         "value":  "65535"
	I1205 06:23:37.644060   48520 command_runner.go:130] >       },
	I1205 06:23:37.644064   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.644075   48520 command_runner.go:130] >       "pinned":  true
	I1205 06:23:37.644078   48520 command_runner.go:130] >     }
	I1205 06:23:37.644081   48520 command_runner.go:130] >   ]
	I1205 06:23:37.644084   48520 command_runner.go:130] > }
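	The containerd.go:627 conclusion on the next line follows from comparing the crictl image list above against the image set expected for the requested Kubernetes version. A minimal Go sketch of that comparison, with the struct trimmed to the fields used here and an illustrative required-image list (not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList matches the shape of `crictl images --output json` above,
    // trimmed to the one field this check needs.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func allPreloaded(crictlJSON []byte, required []string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(crictlJSON, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Illustrative subset of the tags listed in the output above.
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
        }
        fmt.Println(allPreloaded([]byte(`{"images":[]}`), required)) // false <nil>
    }

	When every required tag is present, as in this run, the image load step is skipped entirely (cache_images.go:86 below).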
	I1205 06:23:37.646462   48520 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:23:37.646482   48520 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:23:37.646489   48520 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:23:37.646588   48520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
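	A note on the kubelet unit above: the empty ExecStart= line is deliberate. In a systemd drop-in (this one is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), assigning an empty value first clears any ExecStart inherited from the base unit, so the full command line that follows replaces it rather than being appended as a second ExecStart.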
	I1205 06:23:37.646657   48520 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:23:37.674707   48520 command_runner.go:130] > {
	I1205 06:23:37.674726   48520 command_runner.go:130] >   "cniconfig": {
	I1205 06:23:37.674732   48520 command_runner.go:130] >     "Networks": [
	I1205 06:23:37.674735   48520 command_runner.go:130] >       {
	I1205 06:23:37.674741   48520 command_runner.go:130] >         "Config": {
	I1205 06:23:37.674745   48520 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1205 06:23:37.674752   48520 command_runner.go:130] >           "Name": "cni-loopback",
	I1205 06:23:37.674757   48520 command_runner.go:130] >           "Plugins": [
	I1205 06:23:37.674761   48520 command_runner.go:130] >             {
	I1205 06:23:37.674765   48520 command_runner.go:130] >               "Network": {
	I1205 06:23:37.674769   48520 command_runner.go:130] >                 "ipam": {},
	I1205 06:23:37.674775   48520 command_runner.go:130] >                 "type": "loopback"
	I1205 06:23:37.674779   48520 command_runner.go:130] >               },
	I1205 06:23:37.674785   48520 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1205 06:23:37.674788   48520 command_runner.go:130] >             }
	I1205 06:23:37.674792   48520 command_runner.go:130] >           ],
	I1205 06:23:37.674802   48520 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1205 06:23:37.674806   48520 command_runner.go:130] >         },
	I1205 06:23:37.674813   48520 command_runner.go:130] >         "IFName": "lo"
	I1205 06:23:37.674816   48520 command_runner.go:130] >       }
	I1205 06:23:37.674820   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674825   48520 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1205 06:23:37.674829   48520 command_runner.go:130] >     "PluginDirs": [
	I1205 06:23:37.674832   48520 command_runner.go:130] >       "/opt/cni/bin"
	I1205 06:23:37.674836   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674840   48520 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1205 06:23:37.674844   48520 command_runner.go:130] >     "Prefix": "eth"
	I1205 06:23:37.674846   48520 command_runner.go:130] >   },
	I1205 06:23:37.674850   48520 command_runner.go:130] >   "config": {
	I1205 06:23:37.674854   48520 command_runner.go:130] >     "cdiSpecDirs": [
	I1205 06:23:37.674858   48520 command_runner.go:130] >       "/etc/cdi",
	I1205 06:23:37.674862   48520 command_runner.go:130] >       "/var/run/cdi"
	I1205 06:23:37.674871   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674875   48520 command_runner.go:130] >     "cni": {
	I1205 06:23:37.674879   48520 command_runner.go:130] >       "binDir": "",
	I1205 06:23:37.674883   48520 command_runner.go:130] >       "binDirs": [
	I1205 06:23:37.674888   48520 command_runner.go:130] >         "/opt/cni/bin"
	I1205 06:23:37.674891   48520 command_runner.go:130] >       ],
	I1205 06:23:37.674895   48520 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1205 06:23:37.674899   48520 command_runner.go:130] >       "confTemplate": "",
	I1205 06:23:37.674903   48520 command_runner.go:130] >       "ipPref": "",
	I1205 06:23:37.674907   48520 command_runner.go:130] >       "maxConfNum": 1,
	I1205 06:23:37.674911   48520 command_runner.go:130] >       "setupSerially": false,
	I1205 06:23:37.674916   48520 command_runner.go:130] >       "useInternalLoopback": false
	I1205 06:23:37.674919   48520 command_runner.go:130] >     },
	I1205 06:23:37.674927   48520 command_runner.go:130] >     "containerd": {
	I1205 06:23:37.674932   48520 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1205 06:23:37.674937   48520 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1205 06:23:37.674942   48520 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1205 06:23:37.674946   48520 command_runner.go:130] >       "runtimes": {
	I1205 06:23:37.674950   48520 command_runner.go:130] >         "runc": {
	I1205 06:23:37.674955   48520 command_runner.go:130] >           "ContainerAnnotations": null,
	I1205 06:23:37.674959   48520 command_runner.go:130] >           "PodAnnotations": null,
	I1205 06:23:37.674965   48520 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1205 06:23:37.674969   48520 command_runner.go:130] >           "cgroupWritable": false,
	I1205 06:23:37.674974   48520 command_runner.go:130] >           "cniConfDir": "",
	I1205 06:23:37.674978   48520 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1205 06:23:37.674982   48520 command_runner.go:130] >           "io_type": "",
	I1205 06:23:37.674986   48520 command_runner.go:130] >           "options": {
	I1205 06:23:37.674990   48520 command_runner.go:130] >             "BinaryName": "",
	I1205 06:23:37.674994   48520 command_runner.go:130] >             "CriuImagePath": "",
	I1205 06:23:37.674998   48520 command_runner.go:130] >             "CriuWorkPath": "",
	I1205 06:23:37.675002   48520 command_runner.go:130] >             "IoGid": 0,
	I1205 06:23:37.675006   48520 command_runner.go:130] >             "IoUid": 0,
	I1205 06:23:37.675011   48520 command_runner.go:130] >             "NoNewKeyring": false,
	I1205 06:23:37.675018   48520 command_runner.go:130] >             "Root": "",
	I1205 06:23:37.675022   48520 command_runner.go:130] >             "ShimCgroup": "",
	I1205 06:23:37.675026   48520 command_runner.go:130] >             "SystemdCgroup": false
	I1205 06:23:37.675030   48520 command_runner.go:130] >           },
	I1205 06:23:37.675035   48520 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1205 06:23:37.675042   48520 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1205 06:23:37.675046   48520 command_runner.go:130] >           "runtimePath": "",
	I1205 06:23:37.675051   48520 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1205 06:23:37.675055   48520 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1205 06:23:37.675059   48520 command_runner.go:130] >           "snapshotter": ""
	I1205 06:23:37.675062   48520 command_runner.go:130] >         }
	I1205 06:23:37.675065   48520 command_runner.go:130] >       }
	I1205 06:23:37.675068   48520 command_runner.go:130] >     },
	I1205 06:23:37.675077   48520 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1205 06:23:37.675082   48520 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1205 06:23:37.675087   48520 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1205 06:23:37.675091   48520 command_runner.go:130] >     "disableApparmor": false,
	I1205 06:23:37.675096   48520 command_runner.go:130] >     "disableHugetlbController": true,
	I1205 06:23:37.675100   48520 command_runner.go:130] >     "disableProcMount": false,
	I1205 06:23:37.675104   48520 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1205 06:23:37.675108   48520 command_runner.go:130] >     "enableCDI": true,
	I1205 06:23:37.675112   48520 command_runner.go:130] >     "enableSelinux": false,
	I1205 06:23:37.675117   48520 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1205 06:23:37.675121   48520 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1205 06:23:37.675126   48520 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1205 06:23:37.675131   48520 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1205 06:23:37.675135   48520 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1205 06:23:37.675139   48520 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1205 06:23:37.675144   48520 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1205 06:23:37.675150   48520 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675154   48520 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1205 06:23:37.675159   48520 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675164   48520 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1205 06:23:37.675172   48520 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1205 06:23:37.675176   48520 command_runner.go:130] >   },
	I1205 06:23:37.675179   48520 command_runner.go:130] >   "features": {
	I1205 06:23:37.675184   48520 command_runner.go:130] >     "supplemental_groups_policy": true
	I1205 06:23:37.675187   48520 command_runner.go:130] >   },
	I1205 06:23:37.675190   48520 command_runner.go:130] >   "golang": "go1.24.9",
	I1205 06:23:37.675201   48520 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675211   48520 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675215   48520 command_runner.go:130] >   "runtimeHandlers": [
	I1205 06:23:37.675218   48520 command_runner.go:130] >     {
	I1205 06:23:37.675222   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675227   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675231   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675234   48520 command_runner.go:130] >       }
	I1205 06:23:37.675237   48520 command_runner.go:130] >     },
	I1205 06:23:37.675240   48520 command_runner.go:130] >     {
	I1205 06:23:37.675244   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675249   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675253   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675257   48520 command_runner.go:130] >       },
	I1205 06:23:37.675261   48520 command_runner.go:130] >       "name": "runc"
	I1205 06:23:37.675264   48520 command_runner.go:130] >     }
	I1205 06:23:37.675267   48520 command_runner.go:130] >   ],
	I1205 06:23:37.675270   48520 command_runner.go:130] >   "status": {
	I1205 06:23:37.675273   48520 command_runner.go:130] >     "conditions": [
	I1205 06:23:37.675277   48520 command_runner.go:130] >       {
	I1205 06:23:37.675280   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675284   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675288   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675292   48520 command_runner.go:130] >         "type": "RuntimeReady"
	I1205 06:23:37.675295   48520 command_runner.go:130] >       },
	I1205 06:23:37.675298   48520 command_runner.go:130] >       {
	I1205 06:23:37.675304   48520 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1205 06:23:37.675312   48520 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1205 06:23:37.675316   48520 command_runner.go:130] >         "status": false,
	I1205 06:23:37.675320   48520 command_runner.go:130] >         "type": "NetworkReady"
	I1205 06:23:37.675323   48520 command_runner.go:130] >       },
	I1205 06:23:37.675326   48520 command_runner.go:130] >       {
	I1205 06:23:37.675330   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675334   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675338   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675343   48520 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1205 06:23:37.675347   48520 command_runner.go:130] >       }
	I1205 06:23:37.675350   48520 command_runner.go:130] >     ]
	I1205 06:23:37.675353   48520 command_runner.go:130] >   }
	I1205 06:23:37.675356   48520 command_runner.go:130] > }
	I1205 06:23:37.675685   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:37.675695   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
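	The NetworkReady=false condition in the crictl info output above is expected at this point: no CNI config has been written to /etc/cni/net.d yet, and the cni.go lines here show minikube only now choosing kindnet for the docker driver with the containerd runtime. A minimal Go sketch of reading that condition out of the crictl info JSON, with structs trimmed to the fields used:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // criInfo matches the `crictl info` JSON above, trimmed to the
    // status conditions this check inspects.
    type criInfo struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func networkReady(raw []byte) (bool, string) {
        var info criInfo
        if err := json.Unmarshal(raw, &info); err != nil {
            return false, err.Error()
        }
        for _, c := range info.Status.Conditions {
            if c.Type == "NetworkReady" {
                return c.Status, c.Message
            }
        }
        return false, "condition not reported"
    }

    func main() {
        raw := []byte(`{"status":{"conditions":[{"type":"NetworkReady","status":false,"message":"Network plugin returns error: cni plugin not initialized"}]}}`)
        ok, msg := networkReady(raw)
        fmt.Println(ok, msg) // false Network plugin returns error: cni plugin not initialized
    }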
	I1205 06:23:37.675709   48520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:23:37.675732   48520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:23:37.675850   48520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
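	One property worth calling out in the rendered config above: the pod network (networking.podSubnet and kube-proxy clusterCIDR, 10.244.0.0/16, the CIDR chosen for kindnet) must not overlap the service network (networking.serviceSubnet, 10.96.0.0/12). A quick sanity check using only the Go standard library, as an illustrative sketch rather than anything minikube itself runs:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Values taken from the kubeadm config above.
        pods := netip.MustParsePrefix("10.244.0.0/16")    // networking.podSubnet / clusterCIDR
        services := netip.MustParsePrefix("10.96.0.0/12") // networking.serviceSubnet
        fmt.Println("overlap:", pods.Overlaps(services))  // prints: overlap: false
    }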
	
	I1205 06:23:37.675917   48520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:23:37.682806   48520 command_runner.go:130] > kubeadm
	I1205 06:23:37.682826   48520 command_runner.go:130] > kubectl
	I1205 06:23:37.682831   48520 command_runner.go:130] > kubelet
	I1205 06:23:37.683692   48520 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:23:37.683790   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:23:37.691316   48520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:23:37.703871   48520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:23:37.716284   48520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 06:23:37.728952   48520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:23:37.732950   48520 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:23:37.733083   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.845498   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:37.867115   48520 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:23:37.867139   48520 certs.go:195] generating shared ca certs ...
	I1205 06:23:37.867158   48520 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:37.867407   48520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:23:37.867492   48520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:23:37.867536   48520 certs.go:257] generating profile certs ...
	I1205 06:23:37.867696   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:23:37.867788   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:23:37.867863   48520 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:23:37.867878   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:23:37.867909   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:23:37.867937   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:23:37.867957   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:23:37.867990   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:23:37.868021   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:23:37.868041   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:23:37.868082   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:23:37.868158   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:23:37.868216   48520 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:23:37.868231   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:23:37.868276   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:23:37.868325   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:23:37.868373   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:23:37.868453   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:37.868510   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:37.868541   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem -> /usr/share/ca-certificates/4192.pem
	I1205 06:23:37.868568   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /usr/share/ca-certificates/41922.pem
	I1205 06:23:37.869214   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:23:37.888705   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:23:37.907292   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:23:37.928487   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:23:37.946435   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:23:37.964299   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:23:37.982113   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:23:37.999555   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:23:38.025054   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:23:38.044579   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:23:38.064934   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:23:38.085119   48520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:23:38.098666   48520 ssh_runner.go:195] Run: openssl version
	I1205 06:23:38.104661   48520 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:23:38.105114   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.112530   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:23:38.119940   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123892   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123985   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.124059   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.164658   48520 command_runner.go:130] > 51391683
	I1205 06:23:38.165135   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:23:38.172385   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.179652   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:23:38.187250   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190908   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190946   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190996   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.231356   48520 command_runner.go:130] > 3ec20f2e
	I1205 06:23:38.231428   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:23:38.238676   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.245835   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:23:38.252946   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256642   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256892   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256951   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.296975   48520 command_runner.go:130] > b5213941
	I1205 06:23:38.297434   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
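	For context on the three openssl x509 -hash runs above: the printed values (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes, and the sudo ln -fs ... /etc/ssl/certs/<hash>.0 symlinks follow the c_rehash naming convention that OpenSSL uses to look up a trusted certificate by subject during verification.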
	I1205 06:23:38.304845   48520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308564   48520 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308587   48520 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:23:38.308594   48520 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1205 06:23:38.308601   48520 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:38.308607   48520 command_runner.go:130] > Access: 2025-12-05 06:19:31.018816392 +0000
	I1205 06:23:38.308612   48520 command_runner.go:130] > Modify: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308618   48520 command_runner.go:130] > Change: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308623   48520 command_runner.go:130] >  Birth: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308692   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:23:38.348984   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.349475   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:23:38.394714   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.395243   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:23:38.435818   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.436261   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:23:38.476805   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.477267   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:23:38.518071   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.518611   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:23:38.561014   48520 command_runner.go:130] > Certificate will not expire
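	The six openssl x509 -checkend 86400 runs above each ask whether a certificate expires within the next 24 hours. The same check in Go, using only the standard library, as a sketch (the path is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // at path expires within d (the analogue of `openssl x509 -checkend`).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }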
	I1205 06:23:38.561491   48520 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:38.561574   48520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:23:38.561660   48520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:23:38.588277   48520 cri.go:89] found id: ""
	I1205 06:23:38.588366   48520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:23:38.596406   48520 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:23:38.596430   48520 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:23:38.596438   48520 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:23:38.597543   48520 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:23:38.597605   48520 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:23:38.597685   48520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:23:38.607655   48520 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:23:38.608093   48520 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.608241   48520 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "functional-101526" cluster setting kubeconfig missing "functional-101526" context setting]
	I1205 06:23:38.608622   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.609091   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.609324   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.609886   48520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:23:38.610063   48520 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:23:38.610057   48520 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:23:38.610120   48520 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:23:38.610139   48520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:23:38.610175   48520 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
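	The kubeconfig.go:47/62 lines above detected that the functional-101526 cluster and context entries were missing from the kubeconfig and repaired the file. A minimal sketch of that existence check using client-go's clientcmd loader (the same package logging "Config loaded from file" above); this assumes the k8s.io/client-go module is available, and the path and profile name are taken from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21997-2385/kubeconfig")
        if err != nil {
            fmt.Println("load:", err)
            return
        }
        _, hasCluster := cfg.Clusters["functional-101526"]
        _, hasContext := cfg.Contexts["functional-101526"]
        fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
    }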
	I1205 06:23:38.610495   48520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:23:38.619299   48520 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:23:38.619367   48520 kubeadm.go:602] duration metric: took 21.74243ms to restartPrimaryControlPlane
	I1205 06:23:38.619392   48520 kubeadm.go:403] duration metric: took 57.910865ms to StartCluster
	I1205 06:23:38.619420   48520 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.619502   48520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.620189   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.620458   48520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:23:38.620608   48520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:23:38.620940   48520 addons.go:70] Setting storage-provisioner=true in profile "functional-101526"
	I1205 06:23:38.621064   48520 addons.go:239] Setting addon storage-provisioner=true in "functional-101526"
	I1205 06:23:38.621113   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.620703   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:38.621254   48520 addons.go:70] Setting default-storageclass=true in profile "functional-101526"
	I1205 06:23:38.621267   48520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-101526"
	I1205 06:23:38.621543   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.621837   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.622827   48520 out.go:179] * Verifying Kubernetes components...
	I1205 06:23:38.624023   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:38.667927   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.668094   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.668372   48520 addons.go:239] Setting addon default-storageclass=true in "functional-101526"
	I1205 06:23:38.668400   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.668811   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.682967   48520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:23:38.684152   48520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.684170   48520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:23:38.684236   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.712186   48520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:38.712208   48520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:23:38.712271   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.728758   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.759681   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.830869   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:38.880502   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.894150   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.597389   48520 node_ready.go:35] waiting up to 6m0s for node "functional-101526" to be "Ready" ...
	I1205 06:23:39.597462   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597505   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597540   48520 retry.go:31] will retry after 347.041569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597590   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597614   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597624   48520 retry.go:31] will retry after 291.359395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:23:39.597730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:39.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
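	The storage-provisioner and storageclass applies fail at this point because nothing is listening on localhost:8441 yet (the apiserver is still restarting), and the node GET above returns an empty response status apparently for the same reason; minikube retries each apply after a short randomized delay, as the retry.go:31 lines show. A minimal sketch of that retry-with-jittered-backoff pattern, with illustrative attempt counts and durations rather than minikube's actual tuning:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs f up to attempts times, sleeping a jittered, growing
    // delay between failures, and returns the last error if all fail.
    func retry(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            d := base << uint(i)                           // grow the delay each attempt
            d = d/2 + time.Duration(rand.Int63n(int64(d))) // add +/-50% jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        n := 0
        _ = retry(5, 300*time.Millisecond, func() error {
            n++
            if n < 3 {
                return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused (attempt %d)", n)
            }
            return nil
        })
    }

	Randomizing the delay spreads out concurrent retriers (the two applies here run in parallel) so they do not hit the endpoint in lockstep.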
	I1205 06:23:39.889264   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.945727   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:39.950448   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.950487   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.950523   48520 retry.go:31] will retry after 542.352885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018611   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.018720   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018748   48520 retry.go:31] will retry after 498.666832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.098033   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.098325   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.493962   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:40.518418   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:40.562108   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.562226   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.562260   48520 retry.go:31] will retry after 406.138698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588025   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.588062   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588081   48520 retry.go:31] will retry after 594.532888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.598248   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.598327   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.598636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.969306   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.034172   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.037396   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.037482   48520 retry.go:31] will retry after 875.411269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.098568   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.098689   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.098986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:41.183391   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:41.246665   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.246713   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.246732   48520 retry.go:31] will retry after 928.241992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.598231   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.598321   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:41.598695   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
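In parallel with the addon retries, a separate readiness loop polls GET /api/v1/nodes/functional-101526 roughly every 500ms; while port 8441 refuses connections each request fails immediately (milliseconds=0), and the failure surfaces as the warning above. A minimal sketch of such a poll using client-go, assuming a kubeconfig path and timeout; this is illustrative, not minikube's node_ready.go:

    // Sketch of a node-Ready poll against the Kubernetes API. The node name,
    // kubeconfig path, and timeout below are assumptions for illustration.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(name, kubeconfig string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // While the apiserver is down, err is "connection refused", as in
            // the warnings above; either way, sleep and poll again.
            time.Sleep(500 * time.Millisecond) // the log polls at ~500ms
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
        fmt.Println(waitNodeReady("functional-101526", "/var/lib/minikube/kubeconfig", 4*time.Minute))
    }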
	I1205 06:23:41.913216   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.971936   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.975346   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.975382   48520 retry.go:31] will retry after 1.177811903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:42.175570   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:42.247042   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:42.247165   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.247197   48520 retry.go:31] will retry after 1.26909991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.598419   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.598544   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.598893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.097717   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.098051   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.154349   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:43.214165   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.217853   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.217885   48520 retry.go:31] will retry after 2.752289429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.517328   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:43.580346   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.580405   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.580434   48520 retry.go:31] will retry after 2.299289211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.598503   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.598628   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.598995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:43.599083   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:44.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.098502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.098803   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:44.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.597856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.097813   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.097918   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.098342   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.597661   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.880606   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:45.938914   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:45.938948   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.938966   48520 retry.go:31] will retry after 2.215203034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.971116   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:46.035840   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:46.035877   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.035895   48520 retry.go:31] will retry after 2.493998942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.098074   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.098239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.098559   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:46.098611   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:46.598405   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.598501   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.598815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.098358   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.098432   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.098766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.598407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.598667   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:48.098499   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.098899   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:48.098950   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:48.155209   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:48.214464   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.214512   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.214531   48520 retry.go:31] will retry after 5.617095307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.530967   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:48.587770   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.587811   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.587831   48520 retry.go:31] will retry after 3.714896929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.598174   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.598240   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.598490   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.098439   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.098511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.597635   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.597708   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.097641   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.097715   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.098020   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.598128   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:50.598177   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:51.097653   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.097726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:51.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.598434   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.598708   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.098476   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.098552   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.098854   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.303312   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:52.364380   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:52.367543   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.367573   48520 retry.go:31] will retry after 3.56011918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.597990   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.598059   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.598330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:52.598370   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:53.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.097720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.097995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.598131   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.832691   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:53.932471   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:53.935567   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:53.935601   48520 retry.go:31] will retry after 7.968340753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:54.098032   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.098119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.098504   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:54.598332   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.598408   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.598700   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:54.598750   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:55.098559   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.098636   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.098931   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.598452   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.598735   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.928461   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:55.985797   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:55.985849   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:55.985868   48520 retry.go:31] will retry after 13.95380646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:56.098043   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.098142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:56.598257   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.598332   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.598591   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:57.098338   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.098418   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:57.098806   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:57.598368   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.598727   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.098565   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.098653   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.098993   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.597700   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.598060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.097640   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.097710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.597995   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.598071   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.598388   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:59.598441   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:00.097798   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.097895   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.098216   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:00.598109   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.598187   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.598469   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.098232   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.098307   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.098656   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.598378   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.598756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:01.598798   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:01.904244   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:01.963282   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:01.966528   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:01.966559   48520 retry.go:31] will retry after 12.949527151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
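Note on the workaround the error text suggests: passing --validate=false would only skip the OpenAPI schema download, not the apply itself, which would still fail against a refused connection. Retrying until the apiserver comes back, with delays growing from roughly 300ms to ~14s over this run, is therefore the appropriate behavior here.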
	I1205 06:24:02.097647   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.098069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:02.597723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.597819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.598178   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.097745   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.098222   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.597893   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.597959   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.598249   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:04.097760   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.098267   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:04.098317   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:04.598025   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.598124   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.598425   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.098484   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.098557   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.098824   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.598589   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.598684   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.599025   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.098166   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.597592   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.597662   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.597933   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:06.597973   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:07.098457   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.098530   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.098893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:07.598367   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.598458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.598757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.098344   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.098429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.098757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.598492   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.598559   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.598841   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:08.598881   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:09.097901   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.097973   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.098345   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.598107   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.598174   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.598441   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.939938   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:09.995364   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:09.998554   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:09.998588   48520 retry.go:31] will retry after 16.114489594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
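The apply fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver named in the node's kubeconfig (localhost:8441), and that download is refused. Passing --validate=false would only skip the schema fetch; the actual apply would still fail against the same dead endpoint. The non-round retry delays here and below (16.114489594s, then 12.242306889s, 20.133806896s, ...) indicate jittered exponential backoff; a sketch of that pattern, with constants that are assumptions rather than minikube's actual retry tuning:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // jitteredBackoff doubles a base delay per attempt and adds up to
    // 50% random jitter so concurrent retries do not synchronize.
    func jitteredBackoff(attempt int) time.Duration {
    	d := 10 * time.Second << attempt // 10s, 20s, 40s, ...
    	return d/2 + time.Duration(rand.Int63n(int64(d)/2+1))
    }

    func main() {
    	for i := 0; i < 3; i++ {
    		fmt.Printf("will retry after %s\n", jitteredBackoff(i))
    	}
    }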
	I1205 06:24:10.097931   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.098044   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.098385   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:10.598110   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.598191   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.598513   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:11.098275   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.098343   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.098615   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:11.098670   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:11.598400   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.598740   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.098540   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.098616   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.597962   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.598045   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.598331   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.097746   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.097819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.098163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.597759   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.597834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.598122   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:13.598175   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:14.097627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.097709   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.597953   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.598020   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.598324   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.916824   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:14.975576   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:14.975628   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:14.975646   48520 retry.go:31] will retry after 12.242306889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
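Each addon manifest is applied by running kubectl on the node itself, with KUBECONFIG pointed at the node-local file; the exit status 1 plus the captured stdout/stderr is exactly what feeds the retry loop above. A roughly equivalent invocation via os/exec (paths copied from the log; minikube's SSH transport is elided here):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s\n", out)
    	if err != nil {
    		fmt.Println(err) // "exit status 1" while the apiserver is down
    	}
    }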
	I1205 06:24:15.097909   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.098005   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.098359   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:15.597934   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.598277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:15.598320   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:16.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:16.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.597791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.598100   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.098010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.597774   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.597845   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.598218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:18.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:18.098183   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:18.598335   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.598405   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.598680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.098583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.098655   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.597882   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.597965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.598257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:20.097767   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.097837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.098151   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:20.098210   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:20.597718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.597821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.598163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.097868   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.097944   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.597748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:22.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.097863   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:22.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:22.597927   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.598018   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.598338   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.097757   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.097834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.098165   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.598412   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:24.598451   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:25.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.097818   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.098201   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:25.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.598206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.097703   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.114242   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:26.182245   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:26.182291   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.182309   48520 retry.go:31] will retry after 20.133806896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.597729   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.597815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:27.097723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:27.098168   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:27.218635   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:27.278311   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:27.278351   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.278369   48520 retry.go:31] will retry after 29.943294063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
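At this point both paths to the apiserver refuse connections: the host-side node poll to 192.168.49.2:8441 and the in-node kubectl call to localhost:8441. That points at kube-apiserver not listening at all, rather than a routing or proxy problem. A quick TCP probe sketch (meant to run inside the node, so 127.0.0.1 resolves to the same endpoint kubectl uses):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
    		c, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: %v\n", addr, err) // connect: connection refused
    			continue
    		}
    		c.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }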
	I1205 06:24:27.597675   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.597766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.598047   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.097690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.098089   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.597760   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.598077   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.597938   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.598028   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.598339   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:29.598384   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:30.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:30.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.097803   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.098330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.597811   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.598159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:32.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:32.098247   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:32.598587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.598658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.097615   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.097683   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.098041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.598348   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.598685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:34.098505   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.098598   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.098917   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:34.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:34.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.598097   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.098294   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.598401   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.598478   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.598810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:36.098627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.098700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.099015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:36.099064   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:36.597658   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.598106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.097792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.098117   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.598093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.098206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.597747   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:38.598117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:39.097836   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.097928   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.098334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:39.598071   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.598143   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.598413   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.098336   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.098679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.598542   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.598808   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:40.598849   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:41.098353   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.098417   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.098669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:41.598525   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.598609   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.597659   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:43.097673   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.098074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:43.098136   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:43.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.597761   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.098370   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.098629   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.598627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.598699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.599010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:45.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.097907   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:45.098408   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:45.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.597740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.098586   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.098659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.098977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.316378   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:46.382136   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:46.385605   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.385642   48520 retry.go:31] will retry after 25.45198813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.598118   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.598219   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.598522   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:47.098288   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.098354   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.098627   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:47.098682   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:47.598404   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.598746   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.098648   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.099013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.598372   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.598439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.598709   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:49.098599   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.099061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:49.099113   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.598014   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.598306   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.097691   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.598564   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.598829   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.097583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.097659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.098037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.598325   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.598399   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:51.598761   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:52.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.098621   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.098978   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.597773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.097590   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.097657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.097905   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.597594   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.597666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.597973   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:54.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:54.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:54.597977   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.598054   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.598305   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.097821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.598396   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.598475   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:56.098321   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.098407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.098685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:56.098727   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:56.598502   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.598588   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.598876   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.097587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.097675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.097966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.222289   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:57.284849   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:57.284890   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.284910   48520 retry.go:31] will retry after 41.469992375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
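[editor's note] The two blocks above show the addon applier failing `kubectl apply` because the apiserver on port 8441 refuses connections, then scheduling a retry (retry.go:31) roughly 41s later. A minimal sketch of that retry-after-delay pattern, standard library only; the manifest path, --force flag, and delay are taken from the log, while the function names are illustrative and the sketch drops the sudo/KUBECONFIG wrapping and full kubectl binary path for readability:

package main

import (
	"log"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl the way the logged command does,
// minus the sudo/KUBECONFIG prefix, which this sketch omits.
func applyManifest(path string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", path)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("apply failed, will retry: %v\n%s", err, out)
	}
	return err
}

func main() {
	const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
	// Retry on a fixed delay until the apiserver accepts the apply,
	// mirroring the ~41s retry the log schedules after the first failure.
	for {
		if err := applyManifest(manifest); err == nil {
			log.Printf("applied %s", manifest)
			return
		}
		time.Sleep(41 * time.Second)
	}
}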
	I1205 06:24:57.598343   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.598669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:58.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.098574   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.098880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:58.098930   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:58.597606   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.597675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.098608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.098916   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.597662   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.097620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.097697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.098017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.598474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:00.598791   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:01.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.099039   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:01.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.597775   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.598053   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.098050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:03.097717   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.097804   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.098169   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:03.098231   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:03.597623   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.597691   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.097739   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.098119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.597929   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.598003   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:05.098361   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.098426   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:05.098730   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:05.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.598783   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.098625   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.098705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.099060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.598425   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.598694   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:07.098518   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:07.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:07.597640   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.598023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.097648   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.098028   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.597762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.097853   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.098296   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.598150   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.598411   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:09.598454   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:10.097719   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:10.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.598121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.097959   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.597642   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.838548   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:25:11.913959   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914006   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914113   48520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
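[editor's note] The surrounding request/response lines are the node_ready poll loop: every ~500ms minikube GETs /api/v1/nodes/functional-101526 and, while the connection is refused, logs the recurring warning and keeps retrying. A minimal standalone sketch of that poll-until-ready shape, assuming a plain HTTPS GET stands in for the client-go round tripper (the real requests carry the protobuf Accept header and client certificates, which this sketch replaces with InsecureSkipVerify):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL taken from the log; the TLS config is a placeholder for the
	// real client-certificate setup.
	const nodeURL = "https://192.168.49.2:8441/api/v1/nodes/functional-101526"
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond) // the log polls at ~500ms
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Get(nodeURL)
		if err != nil {
			// Matches the recurring warning: connection refused, will retry.
			fmt.Printf("error getting node (will retry): %v\n", err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
		return
	}
}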
	I1205 06:25:12.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.098446   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.098756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:12.098805   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:12.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.598398   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.598661   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.098442   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.098525   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.098840   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.598638   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.599017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.097669   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.097748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.098009   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.597973   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.598055   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.598377   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:14.598425   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:15.098092   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.098173   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.098548   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:15.598315   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.598383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.598676   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.098414   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.098500   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.098815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.598530   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.598606   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.598956   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:16.599009   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:17.097660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.097731   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:17.597681   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.597754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.598099   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.097848   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.098264   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.597980   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.598079   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.598336   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:19.098419   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.098509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:19.098915   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:19.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.097716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.097977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.597733   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.598177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.097758   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.098096   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.598330   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.598424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.598679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:21.598737   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:22.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.098935   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:22.597673   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.598082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.098439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.098687   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:23.598843   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:24.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.098699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.099069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:24.597875   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.597963   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.598230   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.097788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.098109   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.597831   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.597926   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:26.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.097713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.097972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:26.098033   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:26.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.097823   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.097896   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.597972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:28.097675   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.097744   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.098036   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:28.098084   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:28.597722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.597831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.598154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.097749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.098021   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.597987   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.598315   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:30.098008   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.098085   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.098479   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:30.098542   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:30.598023   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.598099   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.598365   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.097739   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.098082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.597660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.598050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.097729   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.097985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.597714   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:32.598157   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:33.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:33.597803   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.597872   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.598133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.097765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.098121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.598211   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.598290   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.598585   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:34.598631   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:35.098390   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.098471   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:35.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.598657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.598992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.097793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.598358   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.598693   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:36.598731   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:37.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.098568   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.098894   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:37.597976   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.598057   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.599817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:25:38.097679   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:38.598262   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.598388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:38.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:38.755357   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:25:38.811504   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811556   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811634   48520 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:25:38.813895   48520 out.go:179] * Enabled addons: 
	I1205 06:25:38.815272   48520 addons.go:530] duration metric: took 2m0.19467206s for enable addons: enabled=[]
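[editor's note] The "duration metric" line above is the elapsed-time bookkeeping around addon enabling; enabled=[] records that no addon callback succeeded before the roughly two-minute window closed. A tiny sketch of that measurement pattern, assuming the start time is captured when the enable loop begins:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	var enabled []string // stays empty when every addon apply fails, as above
	// ... addon enable callbacks would run here ...
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}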
	I1205 06:25:39.097850   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.097947   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.098277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:39.598056   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.598125   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.598435   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.098242   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.098311   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.098643   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.598717   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:41.098378   48520 type.go:168] "Request Body" body=""
	I1205 06:25:41.098451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:41.098767   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:41.098817   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... this request/response cycle repeats unchanged every ~500 ms from 06:25:41.598 through 06:26:41.598 (same pid 48520, same GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 with protobuf/JSON Accept headers, same empty response status="" headers="" milliseconds=0); node_ready.go:55 keeps logging the identical "connection refused" will-retry warning roughly every 2-2.5 s (06:25:43 through 06:26:40) ...]
	I1205 06:26:42.097741   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.097846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.098252   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:42.597939   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.598259   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:43.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.097781   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.098098   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:43.098152   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:43.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.597750   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.598108   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.097834   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.098175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.598119   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.598197   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.598510   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:45.098372   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.098865   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:45.098935   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:45.598338   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.598404   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.598666   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.098497   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.098980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.597691   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.597786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.098061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.598104   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:47.598156   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:48.097827   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.098233   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:48.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.597728   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.597996   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.097698   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.097770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.597943   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.598016   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.598353   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:49.598407   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:50.097817   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.097883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:50.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.597641   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.597986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:52.097689   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.098130   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:52.098200   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.598147   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.097621   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.097992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.597706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.597779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.598139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:54.097842   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.097924   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:54.098348   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:54.598061   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.598132   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.598397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.097700   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.097773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.598059   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:56.098320   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.098388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.098645   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:56.098686   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:56.598516   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.598594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.598880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.097600   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.097674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.097997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.598395   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:58.098426   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.098498   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.098810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:58.098866   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
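	(The warnings name the node's "Ready" condition: once a GET finally succeeds, the check reads that condition off the returned object. A self-contained sketch of that lookup under the same assumptions; isNodeReady is a hypothetical helper, not minikube's code.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isNodeReady scans the node's status conditions for NodeReady and
	// reports whether it is True, mirroring the condition check the
	// node_ready.go warnings above refer to.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(isNodeReady(n)) // false: a node that answers but is not yet Ready
	}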
	I1205 06:26:58.597573   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.597644   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.597980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.098351   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.098416   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.098680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.598057   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:00.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.099364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1205 06:27:00.099443   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:00.598194   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.598268   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.598536   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.098258   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.098330   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.098652   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.598444   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.598519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.598790   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.098423   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.098519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.098885   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.597608   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.597679   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.598006   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:02.598063   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:03.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:03.597631   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.597706   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.598018   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.098154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.598468   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:04.598512   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:05.098189   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.098594   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:05.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.598461   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.598766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.098531   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.098612   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.598334   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.598739   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:06.598794   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:07.098498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.098572   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.098896   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:07.598506   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.598576   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.598842   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.098375   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.098481   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.098796   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.598436   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.598506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.598870   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:08.598917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:09.098638   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.098713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.099031   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:09.598009   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.598075   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.098104   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.098192   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.098576   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.598351   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.598430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.598734   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:11.098387   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.098458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.098711   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:11.098748   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:11.598498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.598845   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.097611   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.098027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.597725   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.097786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.597777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.598079   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:13.598137   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
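	(Every request in this loop sends Accept: application/vnd.kubernetes.protobuf,application/json, i.e. the client prefers protobuf and falls back to JSON. In client-go that negotiation is set via two fields on rest.Config; a minimal sketch, with the kubeconfig path left as a caller-supplied placeholder.)

	package client

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newProtobufClient builds a clientset whose requests carry the same
	// Accept header seen in the log: protobuf preferred, JSON as fallback.
	func newProtobufClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		return kubernetes.NewForConfig(cfg)
	}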
	I1205 06:27:14.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.098014   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:14.598045   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.598117   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.598450   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.098264   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.098347   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.598409   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.598702   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:15.598770   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:16.098516   48520 type.go:168] "Request Body" body=""
	I1205 06:27:16.098587   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:16.098908   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:16.598224   48520 type.go:168] "Request Body" body=""
	I1205 06:27:16.598302   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:16.598621   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:17.098312   48520 type.go:168] "Request Body" body=""
	I1205 06:27:17.098380   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:17.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:17.598428   48520 type.go:168] "Request Body" body=""
	I1205 06:27:17.598505   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:17.598807   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:17.598861   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:18.098642   48520 type.go:168] "Request Body" body=""
	I1205 06:27:18.098716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:18.099040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:18.597649   48520 type.go:168] "Request Body" body=""
	I1205 06:27:18.597719   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:18.598036   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:19.097919   48520 type.go:168] "Request Body" body=""
	I1205 06:27:19.097999   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:19.098304   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:19.598097   48520 type.go:168] "Request Body" body=""
	I1205 06:27:19.598170   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:19.598520   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:20.098308   48520 type.go:168] "Request Body" body=""
	I1205 06:27:20.098383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:20.098652   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:20.098698   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:20.598481   48520 type.go:168] "Request Body" body=""
	I1205 06:27:20.598549   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:20.598903   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:21.097638   48520 type.go:168] "Request Body" body=""
	I1205 06:27:21.097710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:21.097999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:21.597636   48520 type.go:168] "Request Body" body=""
	I1205 06:27:21.597706   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:21.597968   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:22.097711   48520 type.go:168] "Request Body" body=""
	I1205 06:27:22.097786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:22.098159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:22.597596   48520 type.go:168] "Request Body" body=""
	I1205 06:27:22.597665   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:22.597997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:22.598069   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:23.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:27:23.097723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:23.098062   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:27:23.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:23.598043   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:24.097740   48520 type.go:168] "Request Body" body=""
	I1205 06:27:24.097814   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:24.098175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:24.598060   48520 type.go:168] "Request Body" body=""
	I1205 06:27:24.598131   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:24.598417   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:24.598468   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:25.098259   48520 type.go:168] "Request Body" body=""
	I1205 06:27:25.098337   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:25.098681   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:25.598479   48520 type.go:168] "Request Body" body=""
	I1205 06:27:25.598553   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:25.598817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:26.098335   48520 type.go:168] "Request Body" body=""
	I1205 06:27:26.098403   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:26.098659   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:26.598420   48520 type.go:168] "Request Body" body=""
	I1205 06:27:26.598491   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:26.598790   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:26.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:27.098591   48520 type.go:168] "Request Body" body=""
	I1205 06:27:27.098669   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:27.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:27.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:27:27.597699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:27.597970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:28.097670   48520 type.go:168] "Request Body" body=""
	I1205 06:27:28.097748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:28.098120   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:28.597838   48520 type.go:168] "Request Body" body=""
	I1205 06:27:28.597913   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:28.598248   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:29.097656   48520 type.go:168] "Request Body" body=""
	I1205 06:27:29.097724   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:29.097974   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:29.098015   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:29.597962   48520 type.go:168] "Request Body" body=""
	I1205 06:27:29.598034   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:29.598397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:30.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:27:30.097797   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:30.098177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:30.597863   48520 type.go:168] "Request Body" body=""
	I1205 06:27:30.597934   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:30.598220   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:31.097682   48520 type.go:168] "Request Body" body=""
	I1205 06:27:31.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:31.098095   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:31.098146   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 poll (same request body, headers, and empty response) repeats every ~500ms from 06:27:31.597 through 06:28:32.098, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the same "will retry" warning roughly every 2s throughout this span ...]
	I1205 06:28:32.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:28:32.597865   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:32.598198   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.097897   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.097968   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.098282   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.597749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.597992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.097753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.597944   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.598021   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.598350   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:34.598404   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:35.097649   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.097714   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.097970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:35.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.598762   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.098567   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.098647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.098983   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.598412   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.598488   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.598831   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:36.598888   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:37.098658   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.098727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.099076   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:37.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.598120   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.097779   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.097852   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.098158   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.597763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.598087   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:39.097696   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.097815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.098132   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:39.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:39.598044   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.598118   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.598367   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.097715   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.097794   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.098133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.597711   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.597788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.598088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.097630   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.097700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.597651   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:41.598088   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:42.097792   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.098293   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:42.597741   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.097766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.598466   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:43.598827   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:44.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.098438   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.098690   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:44.598631   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.598985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.097712   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.097807   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.098219   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.597940   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.598275   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:46.097986   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.098060   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.098414   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:46.098473   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:46.598239   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.598322   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.598689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.098433   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.098520   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.098851   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.598595   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.598663   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.598967   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.097777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.098143   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.598005   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:48.598051   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:49.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.097752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.098085   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.598007   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.598332   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.097650   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.097722   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.098001   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.598180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:50.598236   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:51.097912   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.097985   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.098261   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:51.597646   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.597720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.598030   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:52.097764   48520 type.go:168] "Request Body" body=""
	I1205 06:28:52.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:52.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:52.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:28:52.597753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:52.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:53.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.097704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:53.597719   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.597789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.097886   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.098214   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.597970   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.598039   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.598297   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:55.097968   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.098096   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:55.098479   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:55.598243   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.598312   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.598632   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.098325   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.098392   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.098659   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.598446   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.598523   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.598834   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:57.098623   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.098697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.099008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:57.099062   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:57.597638   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.597977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.597925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.598287   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.098257   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.098326   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.098588   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.598617   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.598687   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.598989   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:59.599048   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:00.097762   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.097848   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.098260   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:00.597666   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.098107   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.597703   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:02.097797   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.098139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:02.098179   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:02.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.597901   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.598200   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.097768   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.597806   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.597879   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.598126   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.597968   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.598041   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.598369   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:04.598426   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:05.097837   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.097910   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.098172   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:05.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.597991   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.598317   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.097754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.597807   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.597874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.598213   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:07.097658   48520 type.go:168] "Request Body" body=""
	I1205 06:29:07.097727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:07.098007   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:07.098053   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:07.597689   48520 type.go:168] "Request Body" body=""
	I1205 06:29:07.597779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:07.598111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:08.097779   48520 type.go:168] "Request Body" body=""
	I1205 06:29:08.097848   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:08.098144   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:08.597825   48520 type.go:168] "Request Body" body=""
	I1205 06:29:08.597894   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:08.598214   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:09.098293   48520 type.go:168] "Request Body" body=""
	I1205 06:29:09.098367   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:09.098665   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:09.098713   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:09.598091   48520 type.go:168] "Request Body" body=""
	I1205 06:29:09.598166   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:09.598438   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:10.098197   48520 type.go:168] "Request Body" body=""
	I1205 06:29:10.098285   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:10.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:10.598426   48520 type.go:168] "Request Body" body=""
	I1205 06:29:10.598502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:10.598789   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:11.098356   48520 type.go:168] "Request Body" body=""
	I1205 06:29:11.098424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:11.098675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:11.598534   48520 type.go:168] "Request Body" body=""
	I1205 06:29:11.598618   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:11.598933   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:11.598983   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:12.097671   48520 type.go:168] "Request Body" body=""
	I1205 06:29:12.097742   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:12.098072   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:12.597634   48520 type.go:168] "Request Body" body=""
	I1205 06:29:12.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:12.598000   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:13.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:29:13.097794   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:13.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:13.597707   48520 type.go:168] "Request Body" body=""
	I1205 06:29:13.597778   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:13.598099   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:14.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:29:14.098367   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:14.098625   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:14.098664   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:14.598601   48520 type.go:168] "Request Body" body=""
	I1205 06:29:14.598674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:14.598962   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:15.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:29:15.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:15.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:15.597644   48520 type.go:168] "Request Body" body=""
	I1205 06:29:15.597712   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:15.597989   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:16.097675   48520 type.go:168] "Request Body" body=""
	I1205 06:29:16.097753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:16.098088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:16.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:29:16.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:16.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:16.598176   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:17.097800   48520 type.go:168] "Request Body" body=""
	I1205 06:29:17.097870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:17.098186   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:17.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:29:17.597754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:17.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:18.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:29:18.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:18.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:18.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:29:18.597726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:18.598071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:19.097692   48520 type.go:168] "Request Body" body=""
	I1205 06:29:19.097764   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:19.098110   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:19.098171   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:19.597903   48520 type.go:168] "Request Body" body=""
	I1205 06:29:19.597976   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:19.598294   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:20.097887   48520 type.go:168] "Request Body" body=""
	I1205 06:29:20.097966   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:20.098232   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:20.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:29:20.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:20.598138   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:21.097832   48520 type.go:168] "Request Body" body=""
	I1205 06:29:21.097904   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:21.098238   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:21.098290   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:21.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:29:21.597989   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:21.598308   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:22.097687   48520 type.go:168] "Request Body" body=""
	I1205 06:29:22.097764   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:22.098076   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:22.597707   48520 type.go:168] "Request Body" body=""
	I1205 06:29:22.597793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:22.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:23.598850   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET /api/v1/nodes/functional-101526 poll repeats every ~500 ms, and the same "connection refused" warning is logged every ~2.5 s, until the 6m0s node-ready wait expires ...]
	I1205 06:29:39.098350   48520 type.go:168] "Request Body" body=""
	I1205 06:29:39.098430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:39.098781   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.598588   48520 node_ready.go:38] duration metric: took 6m0.001106708s for node "functional-101526" to be "Ready" ...
	I1205 06:29:39.600415   48520 out.go:203] 
	W1205 06:29:39.601638   48520 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:29:39.601661   48520 out.go:285] * 
	W1205 06:29:39.603936   48520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:29:39.604891   48520 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470691195Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470703101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470714654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470726797Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470745218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470756976Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470774921Z" level=info msg="runtime interface created"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470780665Z" level=info msg="created NRI interface"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470789370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470817637Z" level=info msg="Connect containerd service"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.471185329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.471704882Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492418076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492490028Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492776463Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492877305Z" level=info msg="Start recovering state"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514376443Z" level=info msg="Start event monitor"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514429949Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514446794Z" level=info msg="Start streaming server"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514459767Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514468915Z" level=info msg="runtime interface starting up..."
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514479713Z" level=info msg="starting plugins..."
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514491184Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:23:37 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.520933722Z" level=info msg="containerd successfully booted in 0.070389s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:29:41.762730    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:41.763316    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:41.764413    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:41.764970    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:41.766523    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:29:41 up  1:12,  0 user,  load average: 0.04, 0.23, 0.50
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:29:38 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:38 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 05 06:29:38 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:38 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:38 functional-101526 kubelet[8902]: E1205 06:29:38.890330    8902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:38 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:38 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:39 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 05 06:29:39 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:39 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:39 functional-101526 kubelet[8907]: E1205 06:29:39.697620    8907 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:39 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:39 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:40 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 05 06:29:40 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:40 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:40 functional-101526 kubelet[8913]: E1205 06:29:40.415001    8913 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:40 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:40 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 05 06:29:41 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 kubelet[8934]: E1205 06:29:41.159852    8934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
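The kubelet section above shows the root cause of the whole failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and this Ubuntu 20.04 / kernel 5.15 runner mounts the legacy v1 hierarchy, so systemd restarts kubelet in a loop (restart counter 808–811 above). A quick way to confirm which cgroup version a machine exposes, on the host and inside the node container — a diagnostic sketch for reproducing the check locally, not part of the test harness:

	# GNU stat: prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the same check inside the minikube node container
	docker exec functional-101526 stat -fc %T /sys/fs/cgroup/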
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (350.268652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.23s)
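Every failure in this serial group reduces to the same symptom: nothing listens on the apiserver port because kubelet never stays up long enough to start the static pods. A manual probe of the apiserver endpoint — a hedged sketch; the 32791 loopback port is taken from the docker inspect output reproduced further below:

	# via the container network (expect "connection refused" while kubelet is down)
	curl -k https://192.168.49.2:8441/healthz
	# or via the port Docker publishes on the host loopback
	curl -k https://127.0.0.1:32791/healthz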

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-101526 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-101526 get po -A: exit status 1 (57.788379ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-101526 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-101526 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-101526 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
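The inspect output is long, but only two fields matter for this failure: State.Status (the container itself is still "running") and the 8441/tcp entry under NetworkSettings.Ports (the apiserver port, published on 127.0.0.1:32791). A Go-template one-liner to pull just those two fields — an illustrative command, not part of the harness:

	docker inspect functional-101526 \
	  --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# expected output: running 32791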
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (302.299207ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/41922.pem                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /usr/share/ca-certificates/41922.pem                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ ssh            │ functional-226068 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ update-context │ functional-226068 update-context --alsologtostderr -v=2                                                                                                         │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save kicbase/echo-server:functional-226068 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image rm kicbase/echo-server:functional-226068 --alsologtostderr                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ image          │ functional-226068 image save --daemon kicbase/echo-server:functional-226068 --alsologtostderr                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format short --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format yaml --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format json --alsologtostderr                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls --format table --alsologtostderr                                                                                                     │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh            │ functional-226068 ssh pgrep buildkitd                                                                                                                           │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image          │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                          │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image          │ functional-226068 image ls                                                                                                                                      │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete         │ -p functional-226068                                                                                                                                            │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start          │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start          │ -p functional-101526 --alsologtostderr -v=8                                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:23:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:23:34.555640   48520 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:23:34.555757   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.555768   48520 out.go:374] Setting ErrFile to fd 2...
	I1205 06:23:34.555773   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.556051   48520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:23:34.556413   48520 out.go:368] Setting JSON to false
	I1205 06:23:34.557238   48520 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3961,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:23:34.557311   48520 start.go:143] virtualization:  
	I1205 06:23:34.559039   48520 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:23:34.560249   48520 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:23:34.560305   48520 notify.go:221] Checking for updates...
	I1205 06:23:34.562854   48520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:23:34.564039   48520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:34.565137   48520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:23:34.566333   48520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:23:34.567598   48520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:23:34.569245   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:34.569354   48520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:23:34.590301   48520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:23:34.590415   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.653386   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.643338894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.653494   48520 docker.go:319] overlay module found
	I1205 06:23:34.655010   48520 out.go:179] * Using the docker driver based on existing profile
	I1205 06:23:34.656153   48520 start.go:309] selected driver: docker
	I1205 06:23:34.656167   48520 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.656269   48520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:23:34.656363   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.713521   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.704040472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.713916   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:34.713979   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:34.714025   48520 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.715459   48520 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:23:34.716546   48520 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:23:34.717743   48520 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:23:34.719027   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:34.719180   48520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:23:34.738218   48520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:23:34.738240   48520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:23:34.779237   48520 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:23:34.998431   48520 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
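	Both preload mirrors returned 404 for this beta version on arm64, so minikube falls back to the per-image cache (the cache.go lines that follow). A quick way to check a preload URL by hand — a sketch using standard curl flags, with the URL copied from the warning above:

	    # Print only the HTTP status for the preload tarball; 404 means no preload exists
	    # for this k8s version/arch combination and images must be cached individually.
	    curl -sS -o /dev/null -w '%{http_code}\n' -I \
	      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4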
	I1205 06:23:34.998624   48520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:23:34.998714   48520 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998796   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:23:34.998805   48520 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.154µs
	I1205 06:23:34.998818   48520 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:23:34.998828   48520 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998857   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:23:34.998862   48520 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.504µs
	I1205 06:23:34.998868   48520 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998878   48520 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998890   48520 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:23:34.998904   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:23:34.998909   48520 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.361µs
	I1205 06:23:34.998916   48520 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998919   48520 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998925   48520 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998953   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:23:34.998955   48520 start.go:364] duration metric: took 23.967µs to acquireMachinesLock for "functional-101526"
	I1205 06:23:34.998958   48520 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.961µs
	I1205 06:23:34.998965   48520 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998968   48520 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:23:34.998973   48520 fix.go:54] fixHost starting: 
	I1205 06:23:34.998973   48520 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999001   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:23:34.999006   48520 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.323µs
	I1205 06:23:34.999012   48520 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:23:34.999020   48520 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999055   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:23:34.999060   48520 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.108µs
	I1205 06:23:34.999066   48520 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:23:34.999076   48520 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999117   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:23:34.999122   48520 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.426µs
	I1205 06:23:34.999127   48520 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:23:34.999135   48520 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999162   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:23:34.999167   48520 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.427µs
	I1205 06:23:34.999172   48520 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:23:34.999180   48520 cache.go:87] Successfully saved all images to host disk.
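	The per-image cache the lines above report as complete lives under .minikube/cache/images/<arch>/. A sketch for inspecting it on the Jenkins host, using the path from this log (exact contents will vary by run):

	    # List the cached image tarballs that back the "save to tar file ... succeeded" lines.
	    ls -R /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/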
	I1205 06:23:34.999246   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:35.021908   48520 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:23:35.021948   48520 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:23:35.023534   48520 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:23:35.023573   48520 machine.go:94] provisionDockerMachine start ...
	I1205 06:23:35.023662   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.041007   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.041395   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.041419   48520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:23:35.188597   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.188620   48520 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:23:35.188686   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.205143   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.205585   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.205604   48520 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:23:35.361531   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.361628   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.381210   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.381606   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.381630   48520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:23:35.529415   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
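	The script above either rewrites an existing 127.0.1.1 entry or appends one, so the node's own hostname always resolves locally. A minimal check inside the node, assuming the same hostname:

	    # Confirm the mapping the provisioning script guarantees.
	    grep '127.0.1.1' /etc/hosts   # expect: 127.0.1.1 functional-101526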
	I1205 06:23:35.529441   48520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:23:35.529467   48520 ubuntu.go:190] setting up certificates
	I1205 06:23:35.529477   48520 provision.go:84] configureAuth start
	I1205 06:23:35.529543   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:35.549800   48520 provision.go:143] copyHostCerts
	I1205 06:23:35.549840   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549879   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:23:35.549910   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549992   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:23:35.550081   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550102   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:23:35.550111   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550138   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:23:35.550192   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550212   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:23:35.550220   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550244   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:23:35.550303   48520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
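	The regenerated server certificate carries SANs for loopback, the node IP, and the cluster hostnames listed above. A sketch for verifying them with stock openssl, using the ServerCertPath from the auth options logged earlier:

	    # Dump the certificate and show its Subject Alternative Name extension.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'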
	I1205 06:23:35.896062   48520 provision.go:177] copyRemoteCerts
	I1205 06:23:35.896131   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:23:35.896172   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.915295   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
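	Every provisioning step runs over SSH to the container's published 22/tcp port. The equivalent manual session, a sketch built from the port, key path, and username in the line above (the host port is reassigned on each container start):

	    # Open the same SSH session minikube's ssh_runner uses.
	    ssh -p 32788 \
	      -i /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa \
	      docker@127.0.0.1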
	I1205 06:23:36.022077   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:23:36.022150   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:23:36.041535   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:23:36.041647   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:23:36.060235   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:23:36.060320   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:23:36.078423   48520 provision.go:87] duration metric: took 548.924199ms to configureAuth
	I1205 06:23:36.078451   48520 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:23:36.078638   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:36.078652   48520 machine.go:97] duration metric: took 1.055064213s to provisionDockerMachine
	I1205 06:23:36.078660   48520 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:23:36.078671   48520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:23:36.078720   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:23:36.078768   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.096049   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.200907   48520 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:23:36.204162   48520 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:23:36.204182   48520 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:23:36.204187   48520 command_runner.go:130] > VERSION_ID="12"
	I1205 06:23:36.204192   48520 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:23:36.204196   48520 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:23:36.204200   48520 command_runner.go:130] > ID=debian
	I1205 06:23:36.204205   48520 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:23:36.204210   48520 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:23:36.204232   48520 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:23:36.204297   48520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:23:36.204316   48520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:23:36.204326   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:23:36.204380   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:23:36.204473   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:23:36.204485   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /etc/ssl/certs/41922.pem
	I1205 06:23:36.204565   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:23:36.204573   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> /etc/test/nested/copy/4192/hosts
	I1205 06:23:36.204620   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:23:36.211988   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:36.229308   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:23:36.246073   48520 start.go:296] duration metric: took 167.399532ms for postStartSetup
	I1205 06:23:36.246163   48520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:23:36.246202   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.262461   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.366102   48520 command_runner.go:130] > 13%
	I1205 06:23:36.366647   48520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:23:36.370745   48520 command_runner.go:130] > 169G
	I1205 06:23:36.371285   48520 fix.go:56] duration metric: took 1.372308275s for fixHost
	I1205 06:23:36.371306   48520 start.go:83] releasing machines lock for "functional-101526", held for 1.37234313s
	I1205 06:23:36.371420   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:36.390415   48520 ssh_runner.go:195] Run: cat /version.json
	I1205 06:23:36.390468   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.391053   48520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:23:36.391113   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.419642   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.424516   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.520794   48520 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:23:36.520923   48520 ssh_runner.go:195] Run: systemctl --version
	I1205 06:23:36.606649   48520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:23:36.609416   48520 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:23:36.609453   48520 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:23:36.609534   48520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:23:36.613918   48520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:23:36.613964   48520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:23:36.614023   48520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:23:36.621686   48520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:23:36.621710   48520 start.go:496] detecting cgroup driver to use...
	I1205 06:23:36.621769   48520 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:23:36.621841   48520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:23:36.637331   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:23:36.650267   48520 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:23:36.650327   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:23:36.665934   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:23:36.679279   48520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:23:36.785775   48520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:23:36.894469   48520 docker.go:234] disabling docker service ...
	I1205 06:23:36.894545   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:23:36.910313   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:23:36.923239   48520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:23:37.033287   48520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:23:37.168163   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
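	The stop/disable/mask sequence above keeps dockerd from racing containerd for the CRI role after a reboot. A condensed sketch of the same pattern with plain systemctl (not minikube's exact invocations):

	    # Stop both socket and service, prevent socket activation, and mask the unit.
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    systemctl is-active docker || echo "docker is inactive"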
	I1205 06:23:37.180578   48520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:23:37.193942   48520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
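	/etc/crictl.yaml now pins crictl to the containerd socket, so later `sudo crictl ...` calls need no --runtime-endpoint flag. A quick verification sketch:

	    # Confirm the endpoint just written and that crictl can reach containerd through it.
	    cat /etc/crictl.yaml
	    sudo crictl version   # should report RuntimeName: containerd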
	I1205 06:23:37.194023   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:23:37.202471   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:23:37.211003   48520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:23:37.211119   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:23:37.219839   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.228562   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:23:37.237276   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.245970   48520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:23:37.253895   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:23:37.262450   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:23:37.271505   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
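	The sed passes above pin the sandbox image to pause:3.10.1, force cgroupfs (SystemdCgroup = false), normalize the runtime type to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A spot-check sketch over the same file:

	    # Show the config.toml keys the sed passes rewrote, with line numbers.
	    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	      /etc/containerd/config.toml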
	I1205 06:23:37.280464   48520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:23:37.287174   48520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:23:37.288154   48520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:23:37.295694   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.408389   48520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:23:37.517122   48520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:23:37.517255   48520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:23:37.521337   48520 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1205 06:23:37.521369   48520 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:23:37.521389   48520 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1205 06:23:37.521397   48520 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:37.521404   48520 command_runner.go:130] > Access: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521409   48520 command_runner.go:130] > Modify: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521418   48520 command_runner.go:130] > Change: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521422   48520 command_runner.go:130] >  Birth: -
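	After restarting containerd, minikube polls for the socket within a 60s budget; the stat output above shows it appeared almost immediately. An equivalent bounded wait, as a sketch:

	    # Poll up to 60s for the containerd socket, then stat it as minikube does.
	    for i in $(seq 1 60); do
	      [ -S /run/containerd/containerd.sock ] && break
	      sleep 1
	    done
	    stat /run/containerd/containerd.sock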
	I1205 06:23:37.521666   48520 start.go:564] Will wait 60s for crictl version
	I1205 06:23:37.521723   48520 ssh_runner.go:195] Run: which crictl
	I1205 06:23:37.524716   48520 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:23:37.525219   48520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:23:37.548325   48520 command_runner.go:130] > Version:  0.1.0
	I1205 06:23:37.548510   48520 command_runner.go:130] > RuntimeName:  containerd
	I1205 06:23:37.548666   48520 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1205 06:23:37.548827   48520 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:23:37.551185   48520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:23:37.551250   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.571456   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.573276   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.591907   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.597675   48520 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:23:37.598882   48520 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:23:37.617416   48520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:23:37.621349   48520 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:23:37.621511   48520 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:23:37.621626   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:37.621687   48520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:23:37.643465   48520 command_runner.go:130] > {
	I1205 06:23:37.643493   48520 command_runner.go:130] >   "images":  [
	I1205 06:23:37.643498   48520 command_runner.go:130] >     {
	I1205 06:23:37.643515   48520 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:23:37.643522   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643527   48520 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:23:37.643531   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643535   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643540   48520 command_runner.go:130] >       "size":  "8032639",
	I1205 06:23:37.643545   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643549   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643552   48520 command_runner.go:130] >     },
	I1205 06:23:37.643566   48520 command_runner.go:130] >     {
	I1205 06:23:37.643574   48520 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:23:37.643578   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643583   48520 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:23:37.643586   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643591   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643597   48520 command_runner.go:130] >       "size":  "21166088",
	I1205 06:23:37.643601   48520 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:23:37.643605   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643608   48520 command_runner.go:130] >     },
	I1205 06:23:37.643611   48520 command_runner.go:130] >     {
	I1205 06:23:37.643618   48520 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:23:37.643622   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643627   48520 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:23:37.643630   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643634   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643638   48520 command_runner.go:130] >       "size":  "21134420",
	I1205 06:23:37.643642   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643645   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643648   48520 command_runner.go:130] >       },
	I1205 06:23:37.643652   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643656   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643660   48520 command_runner.go:130] >     },
	I1205 06:23:37.643663   48520 command_runner.go:130] >     {
	I1205 06:23:37.643670   48520 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:23:37.643674   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643687   48520 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:23:37.643693   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643698   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643703   48520 command_runner.go:130] >       "size":  "24676285",
	I1205 06:23:37.643707   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643715   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643719   48520 command_runner.go:130] >       },
	I1205 06:23:37.643727   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643734   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643737   48520 command_runner.go:130] >     },
	I1205 06:23:37.643740   48520 command_runner.go:130] >     {
	I1205 06:23:37.643747   48520 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:23:37.643750   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643756   48520 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:23:37.643759   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643763   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643767   48520 command_runner.go:130] >       "size":  "20658969",
	I1205 06:23:37.643771   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643783   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643790   48520 command_runner.go:130] >       },
	I1205 06:23:37.643794   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643798   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643800   48520 command_runner.go:130] >     },
	I1205 06:23:37.643804   48520 command_runner.go:130] >     {
	I1205 06:23:37.643811   48520 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:23:37.643817   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643822   48520 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:23:37.643826   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643830   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643835   48520 command_runner.go:130] >       "size":  "22428165",
	I1205 06:23:37.643840   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643844   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643853   48520 command_runner.go:130] >     },
	I1205 06:23:37.643856   48520 command_runner.go:130] >     {
	I1205 06:23:37.643863   48520 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:23:37.643867   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643873   48520 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:23:37.643878   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643887   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643893   48520 command_runner.go:130] >       "size":  "15389290",
	I1205 06:23:37.643900   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643905   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643908   48520 command_runner.go:130] >       },
	I1205 06:23:37.643911   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643915   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643918   48520 command_runner.go:130] >     },
	I1205 06:23:37.643921   48520 command_runner.go:130] >     {
	I1205 06:23:37.644021   48520 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:23:37.644028   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.644033   48520 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:23:37.644036   48520 command_runner.go:130] >       ],
	I1205 06:23:37.644041   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.644045   48520 command_runner.go:130] >       "size":  "265458",
	I1205 06:23:37.644049   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.644056   48520 command_runner.go:130] >         "value":  "65535"
	I1205 06:23:37.644060   48520 command_runner.go:130] >       },
	I1205 06:23:37.644064   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.644075   48520 command_runner.go:130] >       "pinned":  true
	I1205 06:23:37.644078   48520 command_runner.go:130] >     }
	I1205 06:23:37.644081   48520 command_runner.go:130] >   ]
	I1205 06:23:37.644084   48520 command_runner.go:130] > }
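	The JSON above confirms all eight expected images (the seven control-plane/runtime images plus storage-provisioner) are already present in containerd. A compact listing sketch, assuming jq is available (this log does not show it installed):

	    # Reduce the crictl images JSON to a plain tag list.
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'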
	I1205 06:23:37.646462   48520 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:23:37.646482   48520 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:23:37.646489   48520 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:23:37.646588   48520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
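	The systemd drop-in above clears the default ExecStart and replaces it with version-pinned kubelet flags (binary path, node IP, hostname override, kubeconfig). To inspect the effective unit on the node, a sketch with stock systemctl:

	    # Show the full kubelet unit including drop-ins, then the rendered ExecStart.
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart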
	I1205 06:23:37.646657   48520 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:23:37.674707   48520 command_runner.go:130] > {
	I1205 06:23:37.674726   48520 command_runner.go:130] >   "cniconfig": {
	I1205 06:23:37.674732   48520 command_runner.go:130] >     "Networks": [
	I1205 06:23:37.674735   48520 command_runner.go:130] >       {
	I1205 06:23:37.674741   48520 command_runner.go:130] >         "Config": {
	I1205 06:23:37.674745   48520 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1205 06:23:37.674752   48520 command_runner.go:130] >           "Name": "cni-loopback",
	I1205 06:23:37.674757   48520 command_runner.go:130] >           "Plugins": [
	I1205 06:23:37.674761   48520 command_runner.go:130] >             {
	I1205 06:23:37.674765   48520 command_runner.go:130] >               "Network": {
	I1205 06:23:37.674769   48520 command_runner.go:130] >                 "ipam": {},
	I1205 06:23:37.674775   48520 command_runner.go:130] >                 "type": "loopback"
	I1205 06:23:37.674779   48520 command_runner.go:130] >               },
	I1205 06:23:37.674785   48520 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1205 06:23:37.674788   48520 command_runner.go:130] >             }
	I1205 06:23:37.674792   48520 command_runner.go:130] >           ],
	I1205 06:23:37.674802   48520 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1205 06:23:37.674806   48520 command_runner.go:130] >         },
	I1205 06:23:37.674813   48520 command_runner.go:130] >         "IFName": "lo"
	I1205 06:23:37.674816   48520 command_runner.go:130] >       }
	I1205 06:23:37.674820   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674825   48520 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1205 06:23:37.674829   48520 command_runner.go:130] >     "PluginDirs": [
	I1205 06:23:37.674832   48520 command_runner.go:130] >       "/opt/cni/bin"
	I1205 06:23:37.674836   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674840   48520 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1205 06:23:37.674844   48520 command_runner.go:130] >     "Prefix": "eth"
	I1205 06:23:37.674846   48520 command_runner.go:130] >   },
	I1205 06:23:37.674850   48520 command_runner.go:130] >   "config": {
	I1205 06:23:37.674854   48520 command_runner.go:130] >     "cdiSpecDirs": [
	I1205 06:23:37.674858   48520 command_runner.go:130] >       "/etc/cdi",
	I1205 06:23:37.674862   48520 command_runner.go:130] >       "/var/run/cdi"
	I1205 06:23:37.674871   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674875   48520 command_runner.go:130] >     "cni": {
	I1205 06:23:37.674879   48520 command_runner.go:130] >       "binDir": "",
	I1205 06:23:37.674883   48520 command_runner.go:130] >       "binDirs": [
	I1205 06:23:37.674888   48520 command_runner.go:130] >         "/opt/cni/bin"
	I1205 06:23:37.674891   48520 command_runner.go:130] >       ],
	I1205 06:23:37.674895   48520 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1205 06:23:37.674899   48520 command_runner.go:130] >       "confTemplate": "",
	I1205 06:23:37.674903   48520 command_runner.go:130] >       "ipPref": "",
	I1205 06:23:37.674907   48520 command_runner.go:130] >       "maxConfNum": 1,
	I1205 06:23:37.674911   48520 command_runner.go:130] >       "setupSerially": false,
	I1205 06:23:37.674916   48520 command_runner.go:130] >       "useInternalLoopback": false
	I1205 06:23:37.674919   48520 command_runner.go:130] >     },
	I1205 06:23:37.674927   48520 command_runner.go:130] >     "containerd": {
	I1205 06:23:37.674932   48520 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1205 06:23:37.674937   48520 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1205 06:23:37.674942   48520 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1205 06:23:37.674946   48520 command_runner.go:130] >       "runtimes": {
	I1205 06:23:37.674950   48520 command_runner.go:130] >         "runc": {
	I1205 06:23:37.674955   48520 command_runner.go:130] >           "ContainerAnnotations": null,
	I1205 06:23:37.674959   48520 command_runner.go:130] >           "PodAnnotations": null,
	I1205 06:23:37.674965   48520 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1205 06:23:37.674969   48520 command_runner.go:130] >           "cgroupWritable": false,
	I1205 06:23:37.674974   48520 command_runner.go:130] >           "cniConfDir": "",
	I1205 06:23:37.674978   48520 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1205 06:23:37.674982   48520 command_runner.go:130] >           "io_type": "",
	I1205 06:23:37.674986   48520 command_runner.go:130] >           "options": {
	I1205 06:23:37.674990   48520 command_runner.go:130] >             "BinaryName": "",
	I1205 06:23:37.674994   48520 command_runner.go:130] >             "CriuImagePath": "",
	I1205 06:23:37.674998   48520 command_runner.go:130] >             "CriuWorkPath": "",
	I1205 06:23:37.675002   48520 command_runner.go:130] >             "IoGid": 0,
	I1205 06:23:37.675006   48520 command_runner.go:130] >             "IoUid": 0,
	I1205 06:23:37.675011   48520 command_runner.go:130] >             "NoNewKeyring": false,
	I1205 06:23:37.675018   48520 command_runner.go:130] >             "Root": "",
	I1205 06:23:37.675022   48520 command_runner.go:130] >             "ShimCgroup": "",
	I1205 06:23:37.675026   48520 command_runner.go:130] >             "SystemdCgroup": false
	I1205 06:23:37.675030   48520 command_runner.go:130] >           },
	I1205 06:23:37.675035   48520 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1205 06:23:37.675042   48520 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1205 06:23:37.675046   48520 command_runner.go:130] >           "runtimePath": "",
	I1205 06:23:37.675051   48520 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1205 06:23:37.675055   48520 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1205 06:23:37.675059   48520 command_runner.go:130] >           "snapshotter": ""
	I1205 06:23:37.675062   48520 command_runner.go:130] >         }
	I1205 06:23:37.675065   48520 command_runner.go:130] >       }
	I1205 06:23:37.675068   48520 command_runner.go:130] >     },
	I1205 06:23:37.675077   48520 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1205 06:23:37.675082   48520 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1205 06:23:37.675087   48520 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1205 06:23:37.675091   48520 command_runner.go:130] >     "disableApparmor": false,
	I1205 06:23:37.675096   48520 command_runner.go:130] >     "disableHugetlbController": true,
	I1205 06:23:37.675100   48520 command_runner.go:130] >     "disableProcMount": false,
	I1205 06:23:37.675104   48520 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1205 06:23:37.675108   48520 command_runner.go:130] >     "enableCDI": true,
	I1205 06:23:37.675112   48520 command_runner.go:130] >     "enableSelinux": false,
	I1205 06:23:37.675117   48520 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1205 06:23:37.675121   48520 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1205 06:23:37.675126   48520 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1205 06:23:37.675131   48520 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1205 06:23:37.675135   48520 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1205 06:23:37.675139   48520 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1205 06:23:37.675144   48520 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1205 06:23:37.675150   48520 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675154   48520 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1205 06:23:37.675159   48520 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675164   48520 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1205 06:23:37.675172   48520 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1205 06:23:37.675176   48520 command_runner.go:130] >   },
	I1205 06:23:37.675179   48520 command_runner.go:130] >   "features": {
	I1205 06:23:37.675184   48520 command_runner.go:130] >     "supplemental_groups_policy": true
	I1205 06:23:37.675187   48520 command_runner.go:130] >   },
	I1205 06:23:37.675190   48520 command_runner.go:130] >   "golang": "go1.24.9",
	I1205 06:23:37.675201   48520 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675211   48520 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675215   48520 command_runner.go:130] >   "runtimeHandlers": [
	I1205 06:23:37.675218   48520 command_runner.go:130] >     {
	I1205 06:23:37.675222   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675227   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675231   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675234   48520 command_runner.go:130] >       }
	I1205 06:23:37.675237   48520 command_runner.go:130] >     },
	I1205 06:23:37.675240   48520 command_runner.go:130] >     {
	I1205 06:23:37.675244   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675249   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675253   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675257   48520 command_runner.go:130] >       },
	I1205 06:23:37.675261   48520 command_runner.go:130] >       "name": "runc"
	I1205 06:23:37.675264   48520 command_runner.go:130] >     }
	I1205 06:23:37.675267   48520 command_runner.go:130] >   ],
	I1205 06:23:37.675270   48520 command_runner.go:130] >   "status": {
	I1205 06:23:37.675273   48520 command_runner.go:130] >     "conditions": [
	I1205 06:23:37.675277   48520 command_runner.go:130] >       {
	I1205 06:23:37.675280   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675284   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675288   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675292   48520 command_runner.go:130] >         "type": "RuntimeReady"
	I1205 06:23:37.675295   48520 command_runner.go:130] >       },
	I1205 06:23:37.675298   48520 command_runner.go:130] >       {
	I1205 06:23:37.675304   48520 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1205 06:23:37.675312   48520 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1205 06:23:37.675316   48520 command_runner.go:130] >         "status": false,
	I1205 06:23:37.675320   48520 command_runner.go:130] >         "type": "NetworkReady"
	I1205 06:23:37.675323   48520 command_runner.go:130] >       },
	I1205 06:23:37.675326   48520 command_runner.go:130] >       {
	I1205 06:23:37.675330   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675334   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675338   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675343   48520 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1205 06:23:37.675347   48520 command_runner.go:130] >       }
	I1205 06:23:37.675350   48520 command_runner.go:130] >     ]
	I1205 06:23:37.675353   48520 command_runner.go:130] >   }
	I1205 06:23:37.675356   48520 command_runner.go:130] > }
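The `crictl info` dump above ends with the runtime's condition list: `RuntimeReady` is true, while `NetworkReady` is false with reason `NetworkPluginNotReady`, which is expected at this point because the kindnet CNI is only deployed once the control plane is back up (see the very next line). A minimal sketch of reading that condition back out of the same JSON, assuming `crictl` is on the PATH (the `networkReady` helper name is illustrative, not minikube's code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// condition mirrors one entry of status.conditions in `crictl info` output.
type condition struct {
	Type   string `json:"type"`
	Status bool   `json:"status"`
	Reason string `json:"reason"`
}

type runtimeInfo struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

// networkReady runs `crictl info` and reports the NetworkReady condition.
func networkReady() (bool, string, error) {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		return false, "", err
	}
	var info runtimeInfo
	if err := json.Unmarshal(out, &info); err != nil {
		return false, "", err
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" {
			return c.Status, c.Reason, nil
		}
	}
	return false, "condition not found", nil
}

func main() {
	ok, reason, err := networkReady()
	fmt.Println(ok, reason, err) // in the run above: false NetworkPluginNotReady <nil>
}
```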
	I1205 06:23:37.675685   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:37.675695   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:37.675709   48520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:23:37.675732   48520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:23:37.675850   48520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
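The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm consumes as a single `--config` file; it is written to `/var/tmp/minikube/kubeadm.yaml.new` a few lines below. A quick way to sanity-check such a stream, sketched with `gopkg.in/yaml.v3` (an assumption; minikube templates the file rather than re-parsing it):

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode each document in the stream and print its identity.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err) // a malformed document in the stream
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```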
	I1205 06:23:37.675917   48520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:23:37.682806   48520 command_runner.go:130] > kubeadm
	I1205 06:23:37.682826   48520 command_runner.go:130] > kubectl
	I1205 06:23:37.682831   48520 command_runner.go:130] > kubelet
	I1205 06:23:37.683692   48520 binaries.go:51] Found k8s binaries, skipping transfer
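"Found k8s binaries, skipping transfer" means all three binaries listed above were already cached under `/var/lib/minikube/binaries/v1.35.0-beta.0`, so no download or copy is needed. The check amounts to stat-ing three paths; a sketch (the `haveBinaries` name is illustrative, not binaries.go itself):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// haveBinaries reports whether all three Kubernetes binaries are already
// present in dir, matching the "Found k8s binaries, skipping transfer"
// decision in the log above.
func haveBinaries(dir string) bool {
	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, b)); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(haveBinaries("/var/lib/minikube/binaries/v1.35.0-beta.0"))
}
```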
	I1205 06:23:37.683790   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:23:37.691316   48520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:23:37.703871   48520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:23:37.716284   48520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 06:23:37.728952   48520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:23:37.732950   48520 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:23:37.733083   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.845498   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:37.867115   48520 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:23:37.867139   48520 certs.go:195] generating shared ca certs ...
	I1205 06:23:37.867158   48520 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:37.867407   48520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:23:37.867492   48520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:23:37.867536   48520 certs.go:257] generating profile certs ...
	I1205 06:23:37.867696   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:23:37.867788   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:23:37.867863   48520 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:23:37.867878   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:23:37.867909   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:23:37.867937   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:23:37.867957   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:23:37.867990   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:23:37.868021   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:23:37.868041   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:23:37.868082   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:23:37.868158   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:23:37.868216   48520 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:23:37.868231   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:23:37.868276   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:23:37.868325   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:23:37.868373   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:23:37.868453   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:37.868510   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:37.868541   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem -> /usr/share/ca-certificates/4192.pem
	I1205 06:23:37.868568   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /usr/share/ca-certificates/41922.pem
	I1205 06:23:37.869214   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:23:37.888705   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:23:37.907292   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:23:37.928487   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:23:37.946435   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:23:37.964299   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:23:37.982113   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:23:37.999555   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:23:38.025054   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:23:38.044579   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:23:38.064934   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:23:38.085119   48520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
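Each `scp memory --> <path>` line above streams an in-memory asset straight to a file on the node instead of copying from local disk. A sketch of the idea with `golang.org/x/crypto/ssh`, using the SSH endpoint (127.0.0.1:32788, user `docker`) that appears later in this log; note minikube's `ssh_runner` implements real scp framing rather than piping through `tee`:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams b to path on the node over SSH, the idea behind the
// "scp memory --> <path>" log lines (a sketch, not minikube's scp framing).
func writeRemote(c *ssh.Client, b []byte, path string) error {
	sess, err := c.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(b)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a local test rig only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println(writeRemote(client, []byte("hello\n"), "/tmp/minikube-demo"))
}
```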
	I1205 06:23:38.098666   48520 ssh_runner.go:195] Run: openssl version
	I1205 06:23:38.104661   48520 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:23:38.105114   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.112530   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:23:38.119940   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123892   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123985   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.124059   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.164658   48520 command_runner.go:130] > 51391683
	I1205 06:23:38.165135   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:23:38.172385   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.179652   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:23:38.187250   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190908   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190946   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190996   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.231356   48520 command_runner.go:130] > 3ec20f2e
	I1205 06:23:38.231428   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:23:38.238676   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.245835   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:23:38.252946   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256642   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256892   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256951   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.296975   48520 command_runner.go:130] > b5213941
	I1205 06:23:38.297434   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
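The three cycles above follow OpenSSL's hashed-directory convention: each CA file is linked into `/etc/ssl/certs`, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is verified so that lookup-by-subject-hash succeeds (e.g. `b5213941.0` for minikubeCA.pem). A sketch of ensuring such a hash link exists (hedged: the log only verifies the link with `test -L`; who creates it is not shown):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert mirrors the hash-and-link pattern in the log: compute the
// OpenSSL subject hash of certPath, then point /etc/ssl/certs/<hash>.0 at it.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. 51391683 for 4192.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```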
	I1205 06:23:38.304845   48520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308564   48520 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308587   48520 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:23:38.308594   48520 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1205 06:23:38.308601   48520 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:38.308607   48520 command_runner.go:130] > Access: 2025-12-05 06:19:31.018816392 +0000
	I1205 06:23:38.308612   48520 command_runner.go:130] > Modify: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308618   48520 command_runner.go:130] > Change: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308623   48520 command_runner.go:130] >  Birth: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308692   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:23:38.348984   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.349475   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:23:38.394714   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.395243   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:23:38.435818   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.436261   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:23:38.476805   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.477267   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:23:38.518071   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.518611   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:23:38.561014   48520 command_runner.go:130] > Certificate will not expire
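`openssl x509 -checkend 86400` exits non-zero when the certificate expires within 24 hours; that is how each "Certificate will not expire" line above is produced. The equivalent check with Go's standard library, as a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, the Go analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // false <nil> corresponds to "Certificate will not expire"
}
```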
	I1205 06:23:38.561491   48520 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:38.561574   48520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:23:38.561660   48520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:23:38.588277   48520 cri.go:89] found id: ""
	I1205 06:23:38.588366   48520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:23:38.596406   48520 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:23:38.596430   48520 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:23:38.596438   48520 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:23:38.597543   48520 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:23:38.597605   48520 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:23:38.597685   48520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:23:38.607655   48520 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:23:38.608093   48520 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.608241   48520 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "functional-101526" cluster setting kubeconfig missing "functional-101526" context setting]
	I1205 06:23:38.608622   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.609091   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.609324   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.609886   48520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:23:38.610063   48520 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:23:38.610057   48520 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:23:38.610120   48520 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:23:38.610139   48520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:23:38.610175   48520 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:23:38.610495   48520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:23:38.619299   48520 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:23:38.619367   48520 kubeadm.go:602] duration metric: took 21.74243ms to restartPrimaryControlPlane
	I1205 06:23:38.619392   48520 kubeadm.go:403] duration metric: took 57.910865ms to StartCluster
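The `diff -u` between `/var/tmp/minikube/kubeadm.yaml` and the freshly written `kubeadm.yaml.new` is the whole reconfiguration test: exit status 0 means the running cluster's config is unchanged, so the restart path skips re-running `kubeadm init`. A sketch of that exit-code check:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig mirrors the diff check above. `diff -u` exits 0 when the
// files match and 1 when they differ.
func needsReconfig() (bool, error) {
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err == nil {
		return false, nil // identical: "does not require reconfiguration"
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: the control plane must be reconfigured
	}
	return false, err // diff itself failed
}

func main() {
	changed, err := needsReconfig()
	fmt.Println(changed, err)
}
```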
	I1205 06:23:38.619420   48520 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.619502   48520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.620189   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
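The repair noted above adds the missing "functional-101526" cluster and context entries to the shared kubeconfig under a write lock. The same edit can be sketched with `k8s.io/client-go/tools/clientcmd` (the `ensureContext` helper is hypothetical; minikube also wires the auth-info entries and file locking):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds (or repairs) a cluster and context entry in a kubeconfig,
// the operation behind the "kubeconfig needs updating (will repair)" line.
func ensureContext(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cluster := api.NewCluster()
	cluster.Server = server
	cluster.CertificateAuthority = caPath
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := ensureContext(
		"/home/jenkins/minikube-integration/21997-2385/kubeconfig",
		"functional-101526",
		"https://192.168.49.2:8441",
		"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt",
	)
	fmt.Println(err)
}
```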
	I1205 06:23:38.620458   48520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:23:38.620608   48520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:23:38.620940   48520 addons.go:70] Setting storage-provisioner=true in profile "functional-101526"
	I1205 06:23:38.621064   48520 addons.go:239] Setting addon storage-provisioner=true in "functional-101526"
	I1205 06:23:38.621113   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.620703   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:38.621254   48520 addons.go:70] Setting default-storageclass=true in profile "functional-101526"
	I1205 06:23:38.621267   48520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-101526"
	I1205 06:23:38.621543   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.621837   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.622827   48520 out.go:179] * Verifying Kubernetes components...
	I1205 06:23:38.624023   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:38.667927   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.668094   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.668372   48520 addons.go:239] Setting addon default-storageclass=true in "functional-101526"
	I1205 06:23:38.668400   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.668811   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.682967   48520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:23:38.684152   48520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.684170   48520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:23:38.684236   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.712186   48520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:38.712208   48520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:23:38.712271   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.728758   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.759681   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
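Both SSH clients connect to 127.0.0.1:32788, the host port Docker published for the container's port 22; the `docker container inspect -f` template shown a few lines above is how that port is resolved. The same lookup as a sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort resolves the host port Docker published for the container's
// port 22, using the same Go template the log shows minikube running.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-101526")
	fmt.Println(port, err) // 32788 in the run above
}
```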
	I1205 06:23:38.830869   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:38.880502   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.894150   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.597389   48520 node_ready.go:35] waiting up to 6m0s for node "functional-101526" to be "Ready" ...
	I1205 06:23:39.597462   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597505   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597540   48520 retry.go:31] will retry after 347.041569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597590   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597614   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597624   48520 retry.go:31] will retry after 291.359395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
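Both addon applies fail identically because kubectl cannot reach the apiserver on localhost:8441 yet (the control plane is still coming back up), so each failure is rescheduled with a short, roughly doubling, jittered delay: 347ms and 291ms here, growing to several seconds further down. A sketch of that retry shape (illustrative only, not minikube's `retry.go`):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff reruns f until it succeeds or attempts run out, sleeping
// a jittered, roughly doubling delay between tries, the shape of the
// "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay/2 + jitter) // e.g. 347ms, 291ms, 542ms, ... above
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 400*time.Millisecond, func() error {
		return fmt.Errorf("apiserver not up yet") // stand-in for the kubectl apply
	})
	fmt.Println(err)
}
```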
	I1205 06:23:39.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:23:39.597730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:39.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:39.889264   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.945727   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:39.950448   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.950487   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.950523   48520 retry.go:31] will retry after 542.352885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018611   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.018720   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018748   48520 retry.go:31] will retry after 498.666832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.098033   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.098325   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.493962   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:40.518418   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:40.562108   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.562226   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.562260   48520 retry.go:31] will retry after 406.138698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588025   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.588062   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588081   48520 retry.go:31] will retry after 594.532888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.598248   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.598327   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.598636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.969306   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.034172   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.037396   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.037482   48520 retry.go:31] will retry after 875.411269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.098568   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.098689   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.098986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:41.183391   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:41.246665   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.246713   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.246732   48520 retry.go:31] will retry after 928.241992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.598231   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.598321   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:41.598695   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
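The repeating Request/Response pairs are the node readiness poll: one GET of the node object roughly every 500ms, with connection-refused errors tolerated (and periodically logged, as here) until the apiserver answers. A sketch of the same wait with client-go (the `waitNodeReady` helper is hypothetical; minikube's `node_ready.go` adds richer logging and timeout plumbing):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition every 500ms, the cadence
// visible in the GET lines above, until it is true or timeout elapses.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} // connection-refused errors are expected while the apiserver restarts
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-2385/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-101526", 6*time.Minute))
}
```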
	I1205 06:23:41.913216   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.971936   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.975346   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.975382   48520 retry.go:31] will retry after 1.177811903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:42.175570   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:42.247042   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:42.247165   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.247197   48520 retry.go:31] will retry after 1.26909991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.598419   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.598544   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.598893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.097717   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.098051   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.154349   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:43.214165   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.217853   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.217885   48520 retry.go:31] will retry after 2.752289429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.517328   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:43.580346   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.580405   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.580434   48520 retry.go:31] will retry after 2.299289211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.598503   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.598628   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.598995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:43.599083   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:44.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.098502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.098803   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:44.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.597856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.097813   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.097918   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.098342   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.597661   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.880606   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:45.938914   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:45.938948   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.938966   48520 retry.go:31] will retry after 2.215203034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.971116   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:46.035840   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:46.035877   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.035895   48520 retry.go:31] will retry after 2.493998942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
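
Each failed apply is rescheduled by retry.go with a randomized, growing delay (2.75s, 2.30s, 2.22s, 2.49s, then 5.62s, 3.71s, and later 12-16s), i.e. jittered exponential backoff capped by an overall deadline. The sketch below reproduces that shape under stated assumptions; it is not minikube's retry package verbatim, and retryWithBackoff plus its parameters are invented names:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or maxTime elapses,
    // sleeping a jittered, exponentially growing interval between
    // attempts -- the same shape as the "will retry after ..." delays
    // in the log.
    func retryWithBackoff(fn func() error, initial, maxTime time.Duration) error {
    	start := time.Now()
    	wait := initial
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > maxTime {
    			return fmt.Errorf("gave up after %s: %w", maxTime, err)
    		}
    		// Jitter in [0.5, 1.5) of the nominal wait, then double it,
    		// so consecutive delays can shrink slightly before growing,
    		// as seen in the logged retry intervals.
    		sleep := time.Duration(float64(wait) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		wait *= 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 4 {
    			return fmt.Errorf("apply failed (attempt %d)", attempts)
    		}
    		return nil
    	}, 2*time.Second, time.Minute)
    	fmt.Println(err)
    }
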
	I1205 06:23:46.098074   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.098239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.098559   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:46.098611   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:46.598405   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.598501   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.598815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.098358   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.098432   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.098766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.598407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.598667   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:48.098499   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.098899   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:48.098950   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:48.155209   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:48.214464   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.214512   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.214531   48520 retry.go:31] will retry after 5.617095307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.530967   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:48.587770   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.587811   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.587831   48520 retry.go:31] will retry after 3.714896929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.598174   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.598240   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.598490   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.098439   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.098511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.597635   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.597708   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.097641   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.097715   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.098020   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.598128   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:50.598177   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:51.097653   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.097726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:51.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.598434   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.598708   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.098476   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.098552   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.098854   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.303312   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:52.364380   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:52.367543   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.367573   48520 retry.go:31] will retry after 3.56011918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.597990   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.598059   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.598330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:52.598370   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:53.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.097720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.097995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.598131   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.832691   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:53.932471   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:53.935567   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:53.935601   48520 retry.go:31] will retry after 7.968340753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:54.098032   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.098119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.098504   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:54.598332   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.598408   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.598700   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:54.598750   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:55.098559   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.098636   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.098931   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.598452   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.598735   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.928461   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:55.985797   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:55.985849   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:55.985868   48520 retry.go:31] will retry after 13.95380646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:56.098043   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.098142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:56.598257   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.598332   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.598591   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:57.098338   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.098418   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:57.098806   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:57.598368   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.598727   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.098565   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.098653   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.098993   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.597700   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.598060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.097640   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.097710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.597995   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.598071   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.598388   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:59.598441   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:00.097798   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.097895   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.098216   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:00.598109   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.598187   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.598469   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.098232   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.098307   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.098656   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.598378   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.598756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:01.598798   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:01.904244   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:01.963282   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:01.966528   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:01.966559   48520 retry.go:31] will retry after 12.949527151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:02.097647   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.098069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:02.597723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.597819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.598178   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.097745   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.098222   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.597893   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.597959   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.598249   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:04.097760   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.098267   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:04.098317   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:04.598025   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.598124   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.598425   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.098484   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.098557   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.098824   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.598589   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.598684   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.599025   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.098166   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.597592   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.597662   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.597933   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:06.597973   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:07.098457   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.098530   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.098893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:07.598367   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.598458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.598757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.098344   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.098429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.098757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.598492   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.598559   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.598841   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:08.598881   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:09.097901   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.097973   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.098345   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.598107   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.598174   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.598441   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.939938   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:09.995364   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:09.998554   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:09.998588   48520 retry.go:31] will retry after 16.114489594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:10.097931   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.098044   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.098385   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:10.598110   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.598191   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.598513   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:11.098275   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.098343   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.098615   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:11.098670   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:11.598400   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.598740   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.098540   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.098616   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.597962   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.598045   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.598331   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.097746   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.097819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.098163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.597759   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.597834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.598122   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:13.598175   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:14.097627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.097709   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.597953   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.598020   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.598324   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.916824   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:14.975576   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:14.975628   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:14.975646   48520 retry.go:31] will retry after 12.242306889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:15.097909   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.098005   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.098359   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:15.597934   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.598277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:15.598320   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:16.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:16.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.597791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.598100   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.098010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.597774   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.597845   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.598218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:18.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:18.098183   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:18.598335   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.598405   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.598680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.098583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.098655   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.597882   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.597965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.598257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:20.097767   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.097837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.098151   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:20.098210   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:20.597718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.597821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.598163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.097868   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.097944   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.597748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:22.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.097863   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:22.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:22.597927   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.598018   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.598338   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.097757   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.097834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.098165   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.598412   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:24.598451   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:25.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.097818   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.098201   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:25.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.598206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.097703   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.114242   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:26.182245   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:26.182291   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.182309   48520 retry.go:31] will retry after 20.133806896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
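	[editor note] The addon apply fails only because the apiserver is still down (kubectl cannot fetch the OpenAPI schema for validation), so retry.go reschedules it on a randomized, growing delay (20.1s here; 29.9s, 25.5s, and 41.5s for the later attempts below). A rough sketch of that retry shape using wait.ExponentialBackoff; the backoff numbers and helper name are assumptions, not minikube's actual retry.go policy:

		package addons

		import (
			"os"
			"os/exec"
			"time"

			"k8s.io/apimachinery/pkg/util/wait"
		)

		// applyWithRetry re-runs `kubectl apply --force -f manifest` until the
		// apiserver is reachable again, sleeping a growing, jittered interval
		// between attempts, like the "will retry after 20.133806896s" line above.
		func applyWithRetry(kubectl, manifest string) error {
			backoff := wait.Backoff{
				Duration: 20 * time.Second, // illustrative; real delays are randomized
				Factor:   1.3,
				Jitter:   0.4,
				Steps:    5,
			}
			return wait.ExponentialBackoff(backoff, func() (bool, error) {
				cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
				cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
				if out, err := cmd.CombinedOutput(); err != nil {
					// "connection refused" while the apiserver restarts: retry.
					_ = out
					return false, nil
				}
				return true, nil
			})
		}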
	I1205 06:24:26.597729   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.597815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:27.097723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:27.098168   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:27.218635   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:27.278311   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:27.278351   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.278369   48520 retry.go:31] will retry after 29.943294063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.597675   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.597766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.598047   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.097690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.098089   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.597760   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.598077   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.597938   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.598028   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.598339   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:29.598384   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:30.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:30.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.097803   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.098330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.597811   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.598159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:32.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:32.098247   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:32.598587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.598658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.097615   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.097683   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.098041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.598348   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.598685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:34.098505   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.098598   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.098917   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:34.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:34.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.598097   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.098294   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.598401   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.598478   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.598810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:36.098627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.098700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.099015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:36.099064   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:36.597658   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.598106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.097792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.098117   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.598093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.098206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.597747   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:38.598117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:39.097836   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.097928   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.098334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:39.598071   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.598143   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.598413   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.098336   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.098679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.598542   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.598808   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:40.598849   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:41.098353   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.098417   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.098669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:41.598525   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.598609   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.597659   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:43.097673   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.098074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:43.098136   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:43.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.597761   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.098370   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.098629   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.598627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.598699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.599010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:45.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.097907   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:45.098408   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:45.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.597740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.098586   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.098659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.098977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.316378   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:46.382136   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:46.385605   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.385642   48520 retry.go:31] will retry after 25.45198813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.598118   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.598219   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.598522   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:47.098288   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.098354   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.098627   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:47.098682   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:47.598404   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.598746   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.098648   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.099013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.598372   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.598439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.598709   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:49.098599   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.099061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:49.099113   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.598014   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.598306   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.097691   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.598564   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.598829   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.097583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.097659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.098037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.598325   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.598399   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:51.598761   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:52.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.098621   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.098978   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.597773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.097590   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.097657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.097905   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.597594   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.597666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.597973   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:54.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:54.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:54.597977   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.598054   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.598305   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.097821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.598396   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.598475   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:56.098321   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.098407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.098685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:56.098727   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:56.598502   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.598588   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.598876   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.097587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.097675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.097966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.222289   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:57.284849   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:57.284890   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.284910   48520 retry.go:31] will retry after 41.469992375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.598343   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.598669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:58.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.098574   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.098880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:58.098930   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:58.597606   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.597675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.098608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.098916   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.597662   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.097620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.097697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.098017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.598474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:00.598791   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:01.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.099039   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:01.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.597775   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.598053   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.098050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:03.097717   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.097804   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.098169   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:03.098231   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:03.597623   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.597691   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.097739   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.098119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.597929   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.598003   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:05.098361   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.098426   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:05.098730   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:05.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.598783   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.098625   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.098705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.099060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.598425   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.598694   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:07.098518   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:07.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:07.597640   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.598023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.097648   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.098028   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.597762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.097853   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.098296   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.598150   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.598411   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:09.598454   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:10.097719   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:10.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.598121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.097959   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.597642   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.838548   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:25:11.913959   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914006   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914113   48520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
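	(The storageclass failure above is not a manifest problem: kubectl's client-side validation first downloads /openapi/v2 from the apiserver, and that request hits the same dead endpoint, so the apply fails before anything is submitted. minikube records "apply failed, will retry" and re-runs the same command later (the storage-provisioner attempt at 06:25:38 below follows the same shape). A minimal sketch of that retry loop, reusing the exact command line from the log via os/exec; the attempt count and the ~27s sleep are assumptions read off the log timestamps, not minikube's actual addons.go logic:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // Re-run `kubectl apply` until it succeeds or attempts run out,
	    // mirroring the "apply failed, will retry" behaviour in the log.
	    func applyWithRetry(manifest string, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            cmd := exec.Command("sudo",
	                "KUBECONFIG=/var/lib/minikube/kubeconfig",
	                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
	                "apply", "--force", "-f", manifest)
	            var out []byte
	            out, err = cmd.CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            fmt.Printf("apply failed (attempt %d): %v\n%s", i+1, err, out)
	            time.Sleep(27 * time.Second) // the log shows ~27s between retries
	        }
	        return err
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
	            fmt.Println("giving up:", err)
	        }
	    }
	)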
	I1205 06:25:12.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.098446   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.098756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:12.098805   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:12.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.598398   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.598661   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.098442   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.098525   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.098840   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.598638   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.599017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.097669   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.097748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.098009   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.597973   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.598055   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.598377   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:14.598425   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:15.098092   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.098173   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.098548   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:15.598315   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.598383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.598676   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.098414   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.098500   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.098815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.598530   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.598606   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.598956   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:16.599009   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:17.097660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.097731   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:17.597681   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.597754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.598099   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.097848   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.098264   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.597980   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.598079   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.598336   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:19.098419   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.098509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:19.098915   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:19.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.097716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.097977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.597733   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.598177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.097758   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.098096   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.598330   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.598424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.598679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:21.598737   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:22.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.098935   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:22.597673   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.598082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.098439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.098687   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:23.598843   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:24.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.098699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.099069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:24.597875   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.597963   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.598230   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.097788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.098109   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.597831   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.597926   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:26.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.097713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.097972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:26.098033   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:26.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.097823   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.097896   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.597972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:28.097675   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.097744   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.098036   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:28.098084   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:28.597722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.597831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.598154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.097749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.098021   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.597987   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.598315   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:30.098008   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.098085   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.098479   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:30.098542   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:30.598023   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.598099   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.598365   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.097739   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.098082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.597660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.598050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.097729   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.097985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.597714   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:32.598157   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:33.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:33.597803   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.597872   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.598133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.097765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.098121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.598211   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.598290   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.598585   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:34.598631   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:35.098390   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.098471   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:35.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.598657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.598992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.097793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.598358   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.598693   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:36.598731   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:37.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.098568   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.098894   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:37.597976   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.598057   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.599817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:25:38.097679   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:38.598262   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.598388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:38.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:38.755357   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:25:38.811504   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811556   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811634   48520 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
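	(Both addon applies and the node poll fail identically: connection refused on 192.168.49.2:8441 and on [::1]:8441 alike, which points at the apiserver process being down rather than one bad route. A quick way to confirm that reading of the log is a plain TCP dial against both addresses it shows failing; the address list and timeout are assumptions taken from the log:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // Distinguish "apiserver down" (refused on every address) from a
	    // single bad route by dialing each endpoint the log reports.
	    func main() {
	        for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
	            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	            if err != nil {
	                fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
	                continue
	            }
	            conn.Close()
	            fmt.Printf("%s: listening\n", addr)
	        }
	    }
	)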
	I1205 06:25:38.813895   48520 out.go:179] * Enabled addons: 
	I1205 06:25:38.815272   48520 addons.go:530] duration metric: took 2m0.19467206s for enable addons: enabled=[]
	I1205 06:25:39.097850   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.097947   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.098277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:39.598056   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.598125   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.598435   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.098242   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.098311   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.098643   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.598717   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:41.098378   48520 type.go:168] "Request Body" body=""
	I1205 06:25:41.098451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:41.098767   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:41.098817   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:41.598539   48520 type.go:168] "Request Body" body=""
	I1205 06:25:41.598608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:41.598921   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:42.097686   48520 type.go:168] "Request Body" body=""
	I1205 06:25:42.097773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:42.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:42.597634   48520 type.go:168] "Request Body" body=""
	I1205 06:25:42.597727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:42.598041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:43.097789   48520 type.go:168] "Request Body" body=""
	I1205 06:25:43.097885   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:43.098205   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:43.597913   48520 type.go:168] "Request Body" body=""
	I1205 06:25:43.597988   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:43.598331   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:43.598385   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:44.097656   48520 type.go:168] "Request Body" body=""
	I1205 06:25:44.097735   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:44.098040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:44.598010   48520 type.go:168] "Request Body" body=""
	I1205 06:25:44.598096   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:44.598435   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:45.098016   48520 type.go:168] "Request Body" body=""
	I1205 06:25:45.098099   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:45.098496   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:45.597755   48520 type.go:168] "Request Body" body=""
	I1205 06:25:45.597830   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:45.598148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:46.097840   48520 type.go:168] "Request Body" body=""
	I1205 06:25:46.097939   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:46.098311   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:46.098366   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:46.598036   48520 type.go:168] "Request Body" body=""
	I1205 06:25:46.598111   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:46.598421   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:47.098155   48520 type.go:168] "Request Body" body=""
	I1205 06:25:47.098226   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:47.098489   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:47.598368   48520 type.go:168] "Request Body" body=""
	I1205 06:25:47.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:47.598715   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:48.098525   48520 type.go:168] "Request Body" body=""
	I1205 06:25:48.098599   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:48.098931   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:48.099014   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:48.598319   48520 type.go:168] "Request Body" body=""
	I1205 06:25:48.598387   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:48.598646   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:49.098618   48520 type.go:168] "Request Body" body=""
	I1205 06:25:49.098694   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:49.099074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:49.597928   48520 type.go:168] "Request Body" body=""
	I1205 06:25:49.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:49.598344   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:50.098007   48520 type.go:168] "Request Body" body=""
	I1205 06:25:50.098092   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:50.098397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:50.598124   48520 type.go:168] "Request Body" body=""
	I1205 06:25:50.598202   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:50.598496   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:50.598545   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:51.098285   48520 type.go:168] "Request Body" body=""
	I1205 06:25:51.098357   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:51.098690   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:51.598330   48520 type.go:168] "Request Body" body=""
	I1205 06:25:51.598395   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:51.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:52.098404   48520 type.go:168] "Request Body" body=""
	I1205 06:25:52.098477   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:52.098809   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:52.598593   48520 type.go:168] "Request Body" body=""
	I1205 06:25:52.598670   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:52.598948   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:52.598996   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:53.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:25:53.097712   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:53.098014   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:53.597701   48520 type.go:168] "Request Body" body=""
	I1205 06:25:53.597798   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:53.598156   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:54.097890   48520 type.go:168] "Request Body" body=""
	I1205 06:25:54.097965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:54.098294   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:54.598153   48520 type.go:168] "Request Body" body=""
	I1205 06:25:54.598231   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:54.598502   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:55.098335   48520 type.go:168] "Request Body" body=""
	I1205 06:25:55.098413   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:55.098774   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:55.098829   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:55.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:55.598649   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:55.598924   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:56.098305   48520 type.go:168] "Request Body" body=""
	I1205 06:25:56.098373   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:56.098641   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:56.598457   48520 type.go:168] "Request Body" body=""
	I1205 06:25:56.598530   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:56.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:57.098559   48520 type.go:168] "Request Body" body=""
	I1205 06:25:57.098633   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:57.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:57.098974   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:57.598334   48520 type.go:168] "Request Body" body=""
	I1205 06:25:57.598416   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:57.598776   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:58.098490   48520 type.go:168] "Request Body" body=""
	I1205 06:25:58.098572   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:58.098937   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:58.598427   48520 type.go:168] "Request Body" body=""
	I1205 06:25:58.598511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:58.598848   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:59.098372   48520 type.go:168] "Request Body" body=""
	I1205 06:25:59.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:59.098724   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:59.597568   48520 type.go:168] "Request Body" body=""
	I1205 06:25:59.597645   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:59.597976   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:59.598030   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:00.098475   48520 type.go:168] "Request Body" body=""
	I1205 06:26:00.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:00.098940   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-101526 repeats every ~500ms from 06:26:00 through 06:27:01.598790, each answered with an empty response (milliseconds=0-1), and node_ready.go:55 logs the same retry warning every few seconds (last at W1205 06:27:00.099443): error getting node "functional-101526" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1205 06:27:02.098423   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.098519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.098885   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.597608   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.597679   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.598006   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:02.598063   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:03.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:03.597631   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.597706   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.598018   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.098154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.598468   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:04.598512   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:05.098189   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.098594   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:05.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.598461   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.598766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.098531   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.098612   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.598334   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.598739   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:06.598794   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:07.098498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.098572   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.098896   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:07.598506   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.598576   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.598842   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.098375   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.098481   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.098796   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.598436   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.598506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.598870   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:08.598917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:09.098638   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.098713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.099031   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:09.598009   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.598075   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.098104   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.098192   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.098576   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.598351   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.598430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.598734   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:11.098387   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.098458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.098711   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:11.098748   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:11.598498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.598845   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.097611   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.098027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.597725   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.097786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.597777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.598079   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:13.598137   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:14.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.098014   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:14.598045   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.598117   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.598450   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.098264   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.098347   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.598409   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.598702   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:15.598770   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:16.098516   48520 type.go:168] "Request Body" body=""
	I1205 06:27:16.098587   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:16.098908   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:16.598224   48520 type.go:168] "Request Body" body=""
	I1205 06:27:16.598302   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:16.598621   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:17.098312   48520 type.go:168] "Request Body" body=""
	I1205 06:27:17.098380   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:17.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:17.598428   48520 type.go:168] "Request Body" body=""
	I1205 06:27:17.598505   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:17.598807   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:17.598861   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:18.098642   48520 type.go:168] "Request Body" body=""
	I1205 06:27:18.098716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:18.099040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:18.597649   48520 type.go:168] "Request Body" body=""
	I1205 06:27:18.597719   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:18.598036   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:19.097919   48520 type.go:168] "Request Body" body=""
	I1205 06:27:19.097999   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:19.098304   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:19.598097   48520 type.go:168] "Request Body" body=""
	I1205 06:27:19.598170   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:19.598520   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:20.098308   48520 type.go:168] "Request Body" body=""
	I1205 06:27:20.098383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:20.098652   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:20.098698   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:20.598481   48520 type.go:168] "Request Body" body=""
	I1205 06:27:20.598549   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:20.598903   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:21.097638   48520 type.go:168] "Request Body" body=""
	I1205 06:27:21.097710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:21.097999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:21.597636   48520 type.go:168] "Request Body" body=""
	I1205 06:27:21.597706   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:21.597968   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:22.097711   48520 type.go:168] "Request Body" body=""
	I1205 06:27:22.097786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:22.098159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:22.597596   48520 type.go:168] "Request Body" body=""
	I1205 06:27:22.597665   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:22.597997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:22.598069   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:23.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:27:23.097723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:23.098062   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:27:23.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:23.598043   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:24.097740   48520 type.go:168] "Request Body" body=""
	I1205 06:27:24.097814   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:24.098175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:24.598060   48520 type.go:168] "Request Body" body=""
	I1205 06:27:24.598131   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:24.598417   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:24.598468   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:25.098259   48520 type.go:168] "Request Body" body=""
	I1205 06:27:25.098337   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:25.098681   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:25.598479   48520 type.go:168] "Request Body" body=""
	I1205 06:27:25.598553   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:25.598817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:26.098335   48520 type.go:168] "Request Body" body=""
	I1205 06:27:26.098403   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:26.098659   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:26.598420   48520 type.go:168] "Request Body" body=""
	I1205 06:27:26.598491   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:26.598790   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:26.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:27.098591   48520 type.go:168] "Request Body" body=""
	I1205 06:27:27.098669   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:27.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:27.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:27:27.597699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:27.597970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:28.097670   48520 type.go:168] "Request Body" body=""
	I1205 06:27:28.097748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:28.098120   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:28.597838   48520 type.go:168] "Request Body" body=""
	I1205 06:27:28.597913   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:28.598248   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:29.097656   48520 type.go:168] "Request Body" body=""
	I1205 06:27:29.097724   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:29.097974   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:29.098015   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:29.597962   48520 type.go:168] "Request Body" body=""
	I1205 06:27:29.598034   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:29.598397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:30.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:27:30.097797   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:30.098177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:30.597863   48520 type.go:168] "Request Body" body=""
	I1205 06:27:30.597934   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:30.598220   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:31.097682   48520 type.go:168] "Request Body" body=""
	I1205 06:27:31.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:31.098095   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:31.098146   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:31.597684   48520 type.go:168] "Request Body" body=""
	I1205 06:27:31.597758   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:31.598081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:32.097685   48520 type.go:168] "Request Body" body=""
	I1205 06:27:32.097754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:32.098017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:32.597761   48520 type.go:168] "Request Body" body=""
	I1205 06:27:32.597831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:32.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:33.097892   48520 type.go:168] "Request Body" body=""
	I1205 06:27:33.097972   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:33.098291   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:33.098349   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:33.597651   48520 type.go:168] "Request Body" body=""
	I1205 06:27:33.597726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:33.598028   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:34.097717   48520 type.go:168] "Request Body" body=""
	I1205 06:27:34.097792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:34.098132   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:34.598024   48520 type.go:168] "Request Body" body=""
	I1205 06:27:34.598102   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:34.598417   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:35.098221   48520 type.go:168] "Request Body" body=""
	I1205 06:27:35.098307   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:35.098585   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:35.098636   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:35.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:27:35.598466   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:35.598796   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:36.098479   48520 type.go:168] "Request Body" body=""
	I1205 06:27:36.098560   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:36.098901   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:36.598360   48520 type.go:168] "Request Body" body=""
	I1205 06:27:36.598431   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:36.598689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:37.098449   48520 type.go:168] "Request Body" body=""
	I1205 06:27:37.098520   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:37.098865   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:37.098917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:37.598548   48520 type.go:168] "Request Body" body=""
	I1205 06:27:37.598615   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:37.598889   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:38.100439   48520 type.go:168] "Request Body" body=""
	I1205 06:27:38.100533   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:38.100821   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:38.598239   48520 type.go:168] "Request Body" body=""
	I1205 06:27:38.598323   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:38.598665   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:39.098567   48520 type.go:168] "Request Body" body=""
	I1205 06:27:39.098663   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:39.099057   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:39.099121   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:39.597976   48520 type.go:168] "Request Body" body=""
	I1205 06:27:39.598045   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:39.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:40.097698   48520 type.go:168] "Request Body" body=""
	I1205 06:27:40.097770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:40.098100   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:40.597799   48520 type.go:168] "Request Body" body=""
	I1205 06:27:40.597871   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:40.598186   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:41.097664   48520 type.go:168] "Request Body" body=""
	I1205 06:27:41.097730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:41.097993   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:41.597682   48520 type.go:168] "Request Body" body=""
	I1205 06:27:41.597753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:41.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:41.598120   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:42.097726   48520 type.go:168] "Request Body" body=""
	I1205 06:27:42.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:42.098183   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:42.597650   48520 type.go:168] "Request Body" body=""
	I1205 06:27:42.597716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:42.597968   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:43.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:27:43.097744   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:43.098084   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:43.598402   48520 type.go:168] "Request Body" body=""
	I1205 06:27:43.598480   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:43.598777   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:43.598825   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:44.098381   48520 type.go:168] "Request Body" body=""
	I1205 06:27:44.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:44.098721   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:44.597605   48520 type.go:168] "Request Body" body=""
	I1205 06:27:44.597678   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:44.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:45.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:27:45.097828   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:45.098242   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:45.597563   48520 type.go:168] "Request Body" body=""
	I1205 06:27:45.597635   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:45.597935   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:46.097599   48520 type.go:168] "Request Body" body=""
	I1205 06:27:46.097672   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:46.097994   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:46.098054   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:46.597698   48520 type.go:168] "Request Body" body=""
	I1205 06:27:46.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:46.598134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:47.098397   48520 type.go:168] "Request Body" body=""
	I1205 06:27:47.098474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:47.098743   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:47.598385   48520 type.go:168] "Request Body" body=""
	I1205 06:27:47.598461   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:47.598785   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:48.098501   48520 type.go:168] "Request Body" body=""
	I1205 06:27:48.098580   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:48.098912   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:48.098971   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:48.598554   48520 type.go:168] "Request Body" body=""
	I1205 06:27:48.598624   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:48.598891   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:49.097780   48520 type.go:168] "Request Body" body=""
	I1205 06:27:49.097870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:49.098243   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:49.598130   48520 type.go:168] "Request Body" body=""
	I1205 06:27:49.598205   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:49.598520   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:50.098061   48520 type.go:168] "Request Body" body=""
	I1205 06:27:50.098132   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:50.098478   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:50.598272   48520 type.go:168] "Request Body" body=""
	I1205 06:27:50.598348   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:50.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:50.598692   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:51.098399   48520 type.go:168] "Request Body" body=""
	I1205 06:27:51.098484   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:51.098838   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:51.598356   48520 type.go:168] "Request Body" body=""
	I1205 06:27:51.598424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:51.598698   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical request/response cycle repeats every ~500 ms from 06:27:52 through 06:28:52: each iteration issues the same GET "https://192.168.49.2:8441/api/v1/nodes/functional-101526" with the same Accept/User-Agent headers and gets back an empty response in 0 ms; node_ready.go:55 re-logs the same warning roughly every 2–2.5 s (06:27:53, 06:27:55, 06:27:57, ..., last at 06:28:50):
	error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1205 06:28:53.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.097704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:53.597719   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.597789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.097886   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.098214   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.597970   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.598039   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.598297   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:55.097968   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.098096   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:55.098479   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:55.598243   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.598312   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.598632   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.098325   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.098392   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.098659   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.598446   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.598523   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.598834   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:57.098623   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.098697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.099008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:57.099062   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:57.597638   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.597977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.597925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.598287   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.098257   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.098326   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.098588   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.598617   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.598687   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.598989   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:59.599048   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:00.097762   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.097848   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.098260   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:00.597666   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.098107   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.597703   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:02.097797   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.098139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:02.098179   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:02.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.597901   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.598200   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.097768   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.597806   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.597879   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.598126   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.597968   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.598041   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.598369   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:04.598426   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:05.097837   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.097910   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.098172   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:05.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.597991   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.598317   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.097754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.597807   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.597874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.598213   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:07.097658   48520 type.go:168] "Request Body" body=""
	I1205 06:29:07.097727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:07.098007   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:07.098053   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:07.597689   48520 type.go:168] "Request Body" body=""
	I1205 06:29:07.597779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:07.598111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:08.097779   48520 type.go:168] "Request Body" body=""
	I1205 06:29:08.097848   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:08.098144   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:08.597825   48520 type.go:168] "Request Body" body=""
	I1205 06:29:08.597894   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:08.598214   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:09.098293   48520 type.go:168] "Request Body" body=""
	I1205 06:29:09.098367   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:09.098665   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:09.098713   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:09.598091   48520 type.go:168] "Request Body" body=""
	I1205 06:29:09.598166   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:09.598438   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:10.098197   48520 type.go:168] "Request Body" body=""
	I1205 06:29:10.098285   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:10.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:10.598426   48520 type.go:168] "Request Body" body=""
	I1205 06:29:10.598502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:10.598789   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:11.098356   48520 type.go:168] "Request Body" body=""
	I1205 06:29:11.098424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:11.098675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:11.598534   48520 type.go:168] "Request Body" body=""
	I1205 06:29:11.598618   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:11.598933   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:11.598983   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:12.097671   48520 type.go:168] "Request Body" body=""
	I1205 06:29:12.097742   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:12.098072   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:12.597634   48520 type.go:168] "Request Body" body=""
	I1205 06:29:12.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:12.598000   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:13.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:29:13.097794   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:13.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:13.597707   48520 type.go:168] "Request Body" body=""
	I1205 06:29:13.597778   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:13.598099   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:14.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:29:14.098367   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:14.098625   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:14.098664   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:14.598601   48520 type.go:168] "Request Body" body=""
	I1205 06:29:14.598674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:14.598962   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:15.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:29:15.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:15.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:15.597644   48520 type.go:168] "Request Body" body=""
	I1205 06:29:15.597712   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:15.597989   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:16.097675   48520 type.go:168] "Request Body" body=""
	I1205 06:29:16.097753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:16.098088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:16.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:29:16.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:16.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:16.598176   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:17.097800   48520 type.go:168] "Request Body" body=""
	I1205 06:29:17.097870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:17.098186   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:17.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:29:17.597754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:17.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:18.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:29:18.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:18.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:18.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:29:18.597726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:18.598071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:19.097692   48520 type.go:168] "Request Body" body=""
	I1205 06:29:19.097764   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:19.098110   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:19.098171   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:19.597903   48520 type.go:168] "Request Body" body=""
	I1205 06:29:19.597976   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:19.598294   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:20.097887   48520 type.go:168] "Request Body" body=""
	I1205 06:29:20.097966   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:20.098232   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:20.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:29:20.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:20.598138   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:21.097832   48520 type.go:168] "Request Body" body=""
	I1205 06:29:21.097904   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:21.098238   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:21.098290   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:21.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:29:21.597989   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:21.598308   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:22.097687   48520 type.go:168] "Request Body" body=""
	I1205 06:29:22.097764   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:22.098076   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:22.597707   48520 type.go:168] "Request Body" body=""
	I1205 06:29:22.597793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:22.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:23.097639   48520 type.go:168] "Request Body" body=""
	I1205 06:29:23.097713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:23.098012   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:23.598468   48520 type.go:168] "Request Body" body=""
	I1205 06:29:23.598537   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:23.598805   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:23.598850   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:24.098628   48520 type.go:168] "Request Body" body=""
	I1205 06:29:24.098711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:24.099042   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:24.598066   48520 type.go:168] "Request Body" body=""
	I1205 06:29:24.598140   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:24.598436   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:25.098234   48520 type.go:168] "Request Body" body=""
	I1205 06:29:25.098304   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:25.098635   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:25.598442   48520 type.go:168] "Request Body" body=""
	I1205 06:29:25.598521   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:25.598813   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:26.098310   48520 type.go:168] "Request Body" body=""
	I1205 06:29:26.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:26.098634   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:26.098672   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:26.598467   48520 type.go:168] "Request Body" body=""
	I1205 06:29:26.598542   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:26.598836   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:27.098604   48520 type.go:168] "Request Body" body=""
	I1205 06:29:27.098674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:27.099002   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:27.597627   48520 type.go:168] "Request Body" body=""
	I1205 06:29:27.597694   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:27.597979   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:28.097729   48520 type.go:168] "Request Body" body=""
	I1205 06:29:28.097806   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:28.098118   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:28.597718   48520 type.go:168] "Request Body" body=""
	I1205 06:29:28.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:28.598105   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:28.598160   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:29.097982   48520 type.go:168] "Request Body" body=""
	I1205 06:29:29.098056   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:29.098364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:29.598171   48520 type.go:168] "Request Body" body=""
	I1205 06:29:29.598241   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:29.598550   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:30.098366   48520 type.go:168] "Request Body" body=""
	I1205 06:29:30.098439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:30.098794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:30.598344   48520 type.go:168] "Request Body" body=""
	I1205 06:29:30.598414   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:30.598658   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:30.598696   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:31.098524   48520 type.go:168] "Request Body" body=""
	I1205 06:29:31.098599   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:31.098930   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:31.597642   48520 type.go:168] "Request Body" body=""
	I1205 06:29:31.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:31.598056   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:32.097640   48520 type.go:168] "Request Body" body=""
	I1205 06:29:32.097715   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:32.097970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:32.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:29:32.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:32.598096   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:33.097781   48520 type.go:168] "Request Body" body=""
	I1205 06:29:33.097856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:33.098197   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:33.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:33.598550   48520 type.go:168] "Request Body" body=""
	I1205 06:29:33.598618   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:33.598869   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:34.097577   48520 type.go:168] "Request Body" body=""
	I1205 06:29:34.097658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:34.097965   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:34.597894   48520 type.go:168] "Request Body" body=""
	I1205 06:29:34.597969   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:34.598297   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:35.097968   48520 type.go:168] "Request Body" body=""
	I1205 06:29:35.098046   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:35.098335   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:35.098382   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:35.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:29:35.597795   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:35.598375   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:29:36.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:36.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:36.597753   48520 type.go:168] "Request Body" body=""
	I1205 06:29:36.597820   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:36.598067   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:37.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:29:37.097808   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:37.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:37.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:29:37.597846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:37.598191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:37.598248   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:38.097901   48520 type.go:168] "Request Body" body=""
	I1205 06:29:38.097981   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:38.098280   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:38.597967   48520 type.go:168] "Request Body" body=""
	I1205 06:29:38.598047   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:38.598406   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.098350   48520 type.go:168] "Request Body" body=""
	I1205 06:29:39.098430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:39.098781   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.598588   48520 node_ready.go:38] duration metric: took 6m0.001106708s for node "functional-101526" to be "Ready" ...
	I1205 06:29:39.600415   48520 out.go:203] 
	W1205 06:29:39.601638   48520 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:29:39.601661   48520 out.go:285] * 
	W1205 06:29:39.603936   48520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:29:39.604891   48520 out.go:203] 
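
The six-minute trace above is a fixed-cadence readiness poll: one GET of the node object every ~500ms until the 6m0s wait deadline, with connect-refused errors treated as retryable. Below is a minimal Go sketch of that pattern, not minikube's actual node_ready.go; the URL, the 2s per-request timeout, and the skipped TLS verification are illustrative assumptions, and a real client would also authenticate.

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Assumed endpoint, matching the URL in the trace above.
        url := "https://192.168.49.2:8441/api/v1/nodes/functional-101526"
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Demo only: a real client verifies the cluster CA and sends credentials.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // The report shows a 6m0s overall wait ("wait 6m0s for node").
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            select {
            case <-ctx.Done():
                // Same terminal state as the report: WaitNodeCondition deadline exceeded.
                fmt.Println("giving up: context deadline exceeded")
                return
            case <-time.After(500 * time.Millisecond): // the ~500ms cadence in the trace
            }
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("will retry:", err) // e.g. "connect: connection refused"
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver answered:", resp.Status)
            return
        }
    }

In this run the loop never gets a response, so it exhausts the deadline and surfaces as the GUEST_START failure above.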
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470691195Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470703101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470714654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470726797Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470745218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470756976Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470774921Z" level=info msg="runtime interface created"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470780665Z" level=info msg="created NRI interface"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470789370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.470817637Z" level=info msg="Connect containerd service"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.471185329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.471704882Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492418076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492490028Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492776463Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.492877305Z" level=info msg="Start recovering state"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514376443Z" level=info msg="Start event monitor"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514429949Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514446794Z" level=info msg="Start streaming server"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514459767Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514468915Z" level=info msg="runtime interface starting up..."
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514479713Z" level=info msg="starting plugins..."
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.514491184Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:23:37 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:23:37 functional-101526 containerd[5817]: time="2025-12-05T06:23:37.520933722Z" level=info msg="containerd successfully booted in 0.070389s"
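
One early failure stands out in the containerd log: the CRI plugin finds no CNI config in /etc/cni/net.d, so pod networking cannot initialize at startup. As a hypothetical repair sketch (on a real minikube node the CNI config is normally laid down by minikube itself), the following Go program writes a minimal bridge conflist; the file name, network name, bridge, and subnet are all assumptions, not values from this report.

    package main

    import "os"

    // Hypothetical minimal conflist satisfying the "*.conflist" loader.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "demo-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        // containerd's CRI plugin picks up conflist files from this directory.
        if err := os.WriteFile("/etc/cni/net.d/10-demo.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }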
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:29:44.008557    9154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:44.009558    9154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:44.011329    9154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:44.011940    9154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:44.012971    9154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:29:44 up  1:12,  0 user,  load average: 0.11, 0.24, 0.50
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:29:40 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 05 06:29:41 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 kubelet[8934]: E1205 06:29:41.159852    8934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 05 06:29:41 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:41 functional-101526 kubelet[9029]: E1205 06:29:41.905271    9029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:41 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:42 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 05 06:29:42 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:42 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:42 functional-101526 kubelet[9037]: E1205 06:29:42.648434    9037 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:42 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:42 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:43 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 05 06:29:43 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:43 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:43 functional-101526 kubelet[9071]: E1205 06:29:43.411769    9071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:43 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:43 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (378.00061ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.17s)
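The kubelet excerpt above is the root cause for this serial group: restart attempts 811 through 814 all exit with "kubelet is configured to not run on a host using cgroup v1", so no apiserver ever comes up behind port 8441 and every kubectl call in the surrounding tests is refused (the containerd CNI warning near the top of the dump is normal before any CNI config is installed and is not the failure here). A minimal check of which cgroup hierarchy a runner exposes, using a standard stat invocation rather than anything captured in this run:

	# Print the filesystem type mounted at /sys/fs/cgroup:
	#   cgroup2fs -> cgroup v2 (unified hierarchy); kubelet validation passes
	#   tmpfs     -> legacy cgroup v1 hierarchy, matching the failure above
	stat -fc %T /sys/fs/cgroup/

On this Ubuntu 20.04 / 5.15.0-1084-aws host the output is presumably tmpfs, consistent with kubelet v1.35.0-beta.0 refusing to start.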

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 kubectl -- --context functional-101526 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 kubectl -- --context functional-101526 get pods: exit status 1 (114.927229ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-101526 kubectl -- --context functional-101526 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
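The inspect output above is the same data the test harness reads with Go templates (see the "docker container inspect -f" calls in the Last Start log below). Equivalent one-liners, a sketch assuming only the container name from this report:

	# Host port Docker mapped to the apiserver port 8441 inside the container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-101526   # 32791 in this run
	# Container address on the per-profile network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-101526           # 192.168.49.2

The container and its port mappings are healthy, so the refused connections on 8441 come from nothing listening inside the container, not from Docker networking.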
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (292.518968ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
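Together with the earlier APIServer check, the two status templates pin the state down: the kic container is Running while the control plane inside it is Stopped. Both fields can be read in one call, a sketch over the same status struct the harness already templates against:

	out/minikube-linux-arm64 status -p functional-101526 --format='{{.Host}}:{{.APIServer}}'
	# Running:Stopped for this profile, per the two checks above

which is why the harness gives up on kubectl-level assertions and moves straight into the post-mortem log collection below.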
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-226068 image ls --format short --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format yaml --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format json --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format table --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh     │ functional-226068 ssh pgrep buildkitd                                                                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image   │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                  │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls                                                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete  │ -p functional-226068                                                                                                                                    │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start   │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start   │ -p functional-101526 --alsologtostderr -v=8                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:latest                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add minikube-local-cache-test:functional-101526                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache delete minikube-local-cache-test:functional-101526                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl images                                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ cache   │ functional-101526 cache reload                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ kubectl │ functional-101526 kubectl -- --context functional-101526 get pods                                                                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:23:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:23:34.555640   48520 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:23:34.555757   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.555768   48520 out.go:374] Setting ErrFile to fd 2...
	I1205 06:23:34.555773   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.556051   48520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:23:34.556413   48520 out.go:368] Setting JSON to false
	I1205 06:23:34.557238   48520 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3961,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:23:34.557311   48520 start.go:143] virtualization:  
	I1205 06:23:34.559039   48520 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:23:34.560249   48520 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:23:34.560305   48520 notify.go:221] Checking for updates...
	I1205 06:23:34.562854   48520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:23:34.564039   48520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:34.565137   48520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:23:34.566333   48520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:23:34.567598   48520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:23:34.569245   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:34.569354   48520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:23:34.590301   48520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:23:34.590415   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.653386   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.643338894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.653494   48520 docker.go:319] overlay module found
	I1205 06:23:34.655010   48520 out.go:179] * Using the docker driver based on existing profile
	I1205 06:23:34.656153   48520 start.go:309] selected driver: docker
	I1205 06:23:34.656167   48520 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.656269   48520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:23:34.656363   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.713521   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.704040472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.713916   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:34.713979   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:34.714025   48520 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.715459   48520 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:23:34.716546   48520 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:23:34.717743   48520 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:23:34.719027   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:34.719180   48520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:23:34.738218   48520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:23:34.738240   48520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:23:34.779237   48520 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:23:34.998431   48520 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 06:23:34.998624   48520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:23:34.998714   48520 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998796   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:23:34.998805   48520 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.154µs
	I1205 06:23:34.998818   48520 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:23:34.998828   48520 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998857   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:23:34.998862   48520 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.504µs
	I1205 06:23:34.998868   48520 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998878   48520 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998890   48520 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:23:34.998904   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:23:34.998909   48520 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.361µs
	I1205 06:23:34.998916   48520 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998919   48520 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998925   48520 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998953   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:23:34.998955   48520 start.go:364] duration metric: took 23.967µs to acquireMachinesLock for "functional-101526"
	I1205 06:23:34.998958   48520 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.961µs
	I1205 06:23:34.998965   48520 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998968   48520 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:23:34.998973   48520 fix.go:54] fixHost starting: 
	I1205 06:23:34.998973   48520 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999001   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:23:34.999006   48520 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.323µs
	I1205 06:23:34.999012   48520 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:23:34.999020   48520 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999055   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:23:34.999060   48520 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.108µs
	I1205 06:23:34.999066   48520 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:23:34.999076   48520 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999117   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:23:34.999122   48520 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.426µs
	I1205 06:23:34.999127   48520 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:23:34.999135   48520 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999162   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:23:34.999167   48520 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.427µs
	I1205 06:23:34.999172   48520 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:23:34.999180   48520 cache.go:87] Successfully saved all images to host disk.
	I1205 06:23:34.999246   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:35.021908   48520 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:23:35.021948   48520 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:23:35.023534   48520 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:23:35.023573   48520 machine.go:94] provisionDockerMachine start ...
	I1205 06:23:35.023662   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.041007   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.041395   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.041419   48520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:23:35.188597   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.188620   48520 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:23:35.188686   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.205143   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.205585   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.205604   48520 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:23:35.361531   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.361628   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.381210   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.381606   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.381630   48520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:23:35.529415   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:23:35.529441   48520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:23:35.529467   48520 ubuntu.go:190] setting up certificates
	I1205 06:23:35.529477   48520 provision.go:84] configureAuth start
	I1205 06:23:35.529543   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:35.549800   48520 provision.go:143] copyHostCerts
	I1205 06:23:35.549840   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549879   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:23:35.549910   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549992   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:23:35.550081   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550102   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:23:35.550111   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550138   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:23:35.550192   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550212   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:23:35.550220   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550244   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:23:35.550303   48520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:23:35.896062   48520 provision.go:177] copyRemoteCerts
	I1205 06:23:35.896131   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:23:35.896172   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.915295   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.022077   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:23:36.022150   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:23:36.041535   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:23:36.041647   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:23:36.060235   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:23:36.060320   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:23:36.078423   48520 provision.go:87] duration metric: took 548.924199ms to configureAuth
	I1205 06:23:36.078451   48520 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:23:36.078638   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:36.078652   48520 machine.go:97] duration metric: took 1.055064213s to provisionDockerMachine
	I1205 06:23:36.078660   48520 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:23:36.078671   48520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:23:36.078720   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:23:36.078768   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.096049   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.200907   48520 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:23:36.204162   48520 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:23:36.204182   48520 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:23:36.204187   48520 command_runner.go:130] > VERSION_ID="12"
	I1205 06:23:36.204192   48520 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:23:36.204196   48520 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:23:36.204200   48520 command_runner.go:130] > ID=debian
	I1205 06:23:36.204205   48520 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:23:36.204210   48520 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:23:36.204232   48520 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:23:36.204297   48520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:23:36.204316   48520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:23:36.204326   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:23:36.204380   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:23:36.204473   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:23:36.204485   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /etc/ssl/certs/41922.pem
	I1205 06:23:36.204565   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:23:36.204573   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> /etc/test/nested/copy/4192/hosts
	I1205 06:23:36.204620   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:23:36.211988   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:36.229308   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:23:36.246073   48520 start.go:296] duration metric: took 167.399532ms for postStartSetup
	I1205 06:23:36.246163   48520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:23:36.246202   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.262461   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.366102   48520 command_runner.go:130] > 13%
	I1205 06:23:36.366647   48520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:23:36.370745   48520 command_runner.go:130] > 169G
	I1205 06:23:36.371285   48520 fix.go:56] duration metric: took 1.372308275s for fixHost
	I1205 06:23:36.371306   48520 start.go:83] releasing machines lock for "functional-101526", held for 1.37234313s
	I1205 06:23:36.371420   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:36.390415   48520 ssh_runner.go:195] Run: cat /version.json
	I1205 06:23:36.390468   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.391053   48520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:23:36.391113   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.419642   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.424516   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.520794   48520 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:23:36.520923   48520 ssh_runner.go:195] Run: systemctl --version
	I1205 06:23:36.606649   48520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:23:36.609416   48520 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:23:36.609453   48520 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:23:36.609534   48520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:23:36.613918   48520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:23:36.613964   48520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:23:36.614023   48520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:23:36.621686   48520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:23:36.621710   48520 start.go:496] detecting cgroup driver to use...
	I1205 06:23:36.621769   48520 detect.go:187] detected "cgroupfs" cgroup driver on host os
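The "detected cgroupfs cgroup driver" line above comes from minikube inspecting the host. Two standard checks report the same facts from a shell; these are generic tools, not minikube's internal detect.go logic:

    # Unified (v2) hierarchies report "cgroup2fs"; legacy v1 mounts report "tmpfs".
    stat -fc %T /sys/fs/cgroup/
    # What the Docker daemon itself is configured to use, e.g. "cgroupfs" or "systemd".
    docker info --format '{{.CgroupDriver}}'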
	I1205 06:23:36.621841   48520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:23:36.637331   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:23:36.650267   48520 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:23:36.650327   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:23:36.665934   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:23:36.679279   48520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:23:36.785775   48520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:23:36.894469   48520 docker.go:234] disabling docker service ...
	I1205 06:23:36.894545   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:23:36.910313   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:23:36.923239   48520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:23:37.033287   48520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:23:37.168163   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:23:37.180578   48520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:23:37.193942   48520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:23:37.194023   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:23:37.202471   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:23:37.211003   48520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:23:37.211119   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:23:37.219839   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.228562   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:23:37.237276   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.245970   48520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:23:37.253895   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:23:37.262450   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:23:37.271505   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
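The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.10.1, forcing SystemdCgroup = false to match the cgroupfs driver, normalizing the runc runtime type to io.containerd.runc.v2, resetting conf_dir, and enabling unprivileged ports. A quick way to confirm the result before containerd is restarted (a plain grep over the key names the sed expressions target):

    # Expected after the edits: sandbox_image = "registry.k8s.io/pause:3.10.1",
    # SystemdCgroup = false, enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d".
    sudo grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml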
	I1205 06:23:37.280464   48520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:23:37.287174   48520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:23:37.288154   48520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:23:37.295694   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.408389   48520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:23:37.517122   48520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:23:37.517255   48520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:23:37.521337   48520 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1205 06:23:37.521369   48520 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:23:37.521389   48520 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1205 06:23:37.521397   48520 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:37.521404   48520 command_runner.go:130] > Access: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521409   48520 command_runner.go:130] > Modify: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521418   48520 command_runner.go:130] > Change: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521422   48520 command_runner.go:130] >  Birth: -
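The 60-second wait announced at start.go:543 above amounts to polling until the stat probe on the socket succeeds, which here it does on the first try. A shell equivalent of that pattern, offered as an illustration (minikube implements it as a Go retry loop, not this command):

    # Poll up to 60s for the containerd socket to appear, then give up.
    timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'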
	I1205 06:23:37.521666   48520 start.go:564] Will wait 60s for crictl version
	I1205 06:23:37.521723   48520 ssh_runner.go:195] Run: which crictl
	I1205 06:23:37.524716   48520 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:23:37.525219   48520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:23:37.548325   48520 command_runner.go:130] > Version:  0.1.0
	I1205 06:23:37.548510   48520 command_runner.go:130] > RuntimeName:  containerd
	I1205 06:23:37.548666   48520 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1205 06:23:37.548827   48520 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:23:37.551185   48520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:23:37.551250   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.571456   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.573276   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.591907   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.597675   48520 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:23:37.598882   48520 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:23:37.617416   48520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:23:37.621349   48520 command_runner.go:130] > 192.168.49.1	host.minikube.internal
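The grep above checks that host.minikube.internal already resolves inside the node; here it does, so nothing is written. When the entry is missing, the idempotent fix has this shape (illustrative; the 192.168.49.1 address is the docker network gateway from this log):

    # Append the mapping only if no host.minikube.internal line exists yet.
    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.49.1 host.minikube.internal' | sudo tee -a /etc/hosts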
	I1205 06:23:37.621511   48520 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:23:37.621626   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:37.621687   48520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:23:37.643465   48520 command_runner.go:130] > {
	I1205 06:23:37.643493   48520 command_runner.go:130] >   "images":  [
	I1205 06:23:37.643498   48520 command_runner.go:130] >     {
	I1205 06:23:37.643515   48520 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:23:37.643522   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643527   48520 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:23:37.643531   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643535   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643540   48520 command_runner.go:130] >       "size":  "8032639",
	I1205 06:23:37.643545   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643549   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643552   48520 command_runner.go:130] >     },
	I1205 06:23:37.643566   48520 command_runner.go:130] >     {
	I1205 06:23:37.643574   48520 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:23:37.643578   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643583   48520 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:23:37.643586   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643591   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643597   48520 command_runner.go:130] >       "size":  "21166088",
	I1205 06:23:37.643601   48520 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:23:37.643605   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643608   48520 command_runner.go:130] >     },
	I1205 06:23:37.643611   48520 command_runner.go:130] >     {
	I1205 06:23:37.643618   48520 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:23:37.643622   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643627   48520 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:23:37.643630   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643634   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643638   48520 command_runner.go:130] >       "size":  "21134420",
	I1205 06:23:37.643642   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643645   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643648   48520 command_runner.go:130] >       },
	I1205 06:23:37.643652   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643656   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643660   48520 command_runner.go:130] >     },
	I1205 06:23:37.643663   48520 command_runner.go:130] >     {
	I1205 06:23:37.643670   48520 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:23:37.643674   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643687   48520 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:23:37.643693   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643698   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643703   48520 command_runner.go:130] >       "size":  "24676285",
	I1205 06:23:37.643707   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643715   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643719   48520 command_runner.go:130] >       },
	I1205 06:23:37.643727   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643734   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643737   48520 command_runner.go:130] >     },
	I1205 06:23:37.643740   48520 command_runner.go:130] >     {
	I1205 06:23:37.643747   48520 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:23:37.643750   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643756   48520 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:23:37.643759   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643763   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643767   48520 command_runner.go:130] >       "size":  "20658969",
	I1205 06:23:37.643771   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643783   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643790   48520 command_runner.go:130] >       },
	I1205 06:23:37.643794   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643798   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643800   48520 command_runner.go:130] >     },
	I1205 06:23:37.643804   48520 command_runner.go:130] >     {
	I1205 06:23:37.643811   48520 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:23:37.643817   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643822   48520 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:23:37.643826   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643830   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643835   48520 command_runner.go:130] >       "size":  "22428165",
	I1205 06:23:37.643840   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643844   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643853   48520 command_runner.go:130] >     },
	I1205 06:23:37.643856   48520 command_runner.go:130] >     {
	I1205 06:23:37.643863   48520 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:23:37.643867   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643873   48520 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:23:37.643878   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643887   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643893   48520 command_runner.go:130] >       "size":  "15389290",
	I1205 06:23:37.643900   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643905   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643908   48520 command_runner.go:130] >       },
	I1205 06:23:37.643911   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643915   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643918   48520 command_runner.go:130] >     },
	I1205 06:23:37.643921   48520 command_runner.go:130] >     {
	I1205 06:23:37.644021   48520 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:23:37.644028   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.644033   48520 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:23:37.644036   48520 command_runner.go:130] >       ],
	I1205 06:23:37.644041   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.644045   48520 command_runner.go:130] >       "size":  "265458",
	I1205 06:23:37.644049   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.644056   48520 command_runner.go:130] >         "value":  "65535"
	I1205 06:23:37.644060   48520 command_runner.go:130] >       },
	I1205 06:23:37.644064   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.644075   48520 command_runner.go:130] >       "pinned":  true
	I1205 06:23:37.644078   48520 command_runner.go:130] >     }
	I1205 06:23:37.644081   48520 command_runner.go:130] >   ]
	I1205 06:23:37.644084   48520 command_runner.go:130] > }
	I1205 06:23:37.646462   48520 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:23:37.646482   48520 cache_images.go:86] Images are preloaded, skipping loading
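The "all images are preloaded" conclusion above comes from diffing the JSON dump against the expected image list for v1.35.0-beta.0 on containerd. With jq installed (an assumption here; the test harness parses the JSON in Go), the same inventory can be pulled by hand:

    # List every repo tag the CRI currently knows about.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'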
	I1205 06:23:37.646489   48520 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:23:37.646588   48520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
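The kubelet unit printed above is written as a systemd drop-in; the empty ExecStart= line deliberately clears any distribution default before setting minikube's own command line. Once the drop-in and unit files land (the scp lines further below), the merged result can be reviewed with standard systemd tooling:

    # Show the unit plus all drop-ins, including 10-kubeadm.conf.
    systemctl cat kubelet
    # Confirm the effective ExecStart after the reset.
    systemctl show kubelet -p ExecStart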
	I1205 06:23:37.646657   48520 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:23:37.674707   48520 command_runner.go:130] > {
	I1205 06:23:37.674726   48520 command_runner.go:130] >   "cniconfig": {
	I1205 06:23:37.674732   48520 command_runner.go:130] >     "Networks": [
	I1205 06:23:37.674735   48520 command_runner.go:130] >       {
	I1205 06:23:37.674741   48520 command_runner.go:130] >         "Config": {
	I1205 06:23:37.674745   48520 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1205 06:23:37.674752   48520 command_runner.go:130] >           "Name": "cni-loopback",
	I1205 06:23:37.674757   48520 command_runner.go:130] >           "Plugins": [
	I1205 06:23:37.674761   48520 command_runner.go:130] >             {
	I1205 06:23:37.674765   48520 command_runner.go:130] >               "Network": {
	I1205 06:23:37.674769   48520 command_runner.go:130] >                 "ipam": {},
	I1205 06:23:37.674775   48520 command_runner.go:130] >                 "type": "loopback"
	I1205 06:23:37.674779   48520 command_runner.go:130] >               },
	I1205 06:23:37.674785   48520 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1205 06:23:37.674788   48520 command_runner.go:130] >             }
	I1205 06:23:37.674792   48520 command_runner.go:130] >           ],
	I1205 06:23:37.674802   48520 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1205 06:23:37.674806   48520 command_runner.go:130] >         },
	I1205 06:23:37.674813   48520 command_runner.go:130] >         "IFName": "lo"
	I1205 06:23:37.674816   48520 command_runner.go:130] >       }
	I1205 06:23:37.674820   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674825   48520 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1205 06:23:37.674829   48520 command_runner.go:130] >     "PluginDirs": [
	I1205 06:23:37.674832   48520 command_runner.go:130] >       "/opt/cni/bin"
	I1205 06:23:37.674836   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674840   48520 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1205 06:23:37.674844   48520 command_runner.go:130] >     "Prefix": "eth"
	I1205 06:23:37.674846   48520 command_runner.go:130] >   },
	I1205 06:23:37.674850   48520 command_runner.go:130] >   "config": {
	I1205 06:23:37.674854   48520 command_runner.go:130] >     "cdiSpecDirs": [
	I1205 06:23:37.674858   48520 command_runner.go:130] >       "/etc/cdi",
	I1205 06:23:37.674862   48520 command_runner.go:130] >       "/var/run/cdi"
	I1205 06:23:37.674871   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674875   48520 command_runner.go:130] >     "cni": {
	I1205 06:23:37.674879   48520 command_runner.go:130] >       "binDir": "",
	I1205 06:23:37.674883   48520 command_runner.go:130] >       "binDirs": [
	I1205 06:23:37.674888   48520 command_runner.go:130] >         "/opt/cni/bin"
	I1205 06:23:37.674891   48520 command_runner.go:130] >       ],
	I1205 06:23:37.674895   48520 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1205 06:23:37.674899   48520 command_runner.go:130] >       "confTemplate": "",
	I1205 06:23:37.674903   48520 command_runner.go:130] >       "ipPref": "",
	I1205 06:23:37.674907   48520 command_runner.go:130] >       "maxConfNum": 1,
	I1205 06:23:37.674911   48520 command_runner.go:130] >       "setupSerially": false,
	I1205 06:23:37.674916   48520 command_runner.go:130] >       "useInternalLoopback": false
	I1205 06:23:37.674919   48520 command_runner.go:130] >     },
	I1205 06:23:37.674927   48520 command_runner.go:130] >     "containerd": {
	I1205 06:23:37.674932   48520 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1205 06:23:37.674937   48520 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1205 06:23:37.674942   48520 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1205 06:23:37.674946   48520 command_runner.go:130] >       "runtimes": {
	I1205 06:23:37.674950   48520 command_runner.go:130] >         "runc": {
	I1205 06:23:37.674955   48520 command_runner.go:130] >           "ContainerAnnotations": null,
	I1205 06:23:37.674959   48520 command_runner.go:130] >           "PodAnnotations": null,
	I1205 06:23:37.674965   48520 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1205 06:23:37.674969   48520 command_runner.go:130] >           "cgroupWritable": false,
	I1205 06:23:37.674974   48520 command_runner.go:130] >           "cniConfDir": "",
	I1205 06:23:37.674978   48520 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1205 06:23:37.674982   48520 command_runner.go:130] >           "io_type": "",
	I1205 06:23:37.674986   48520 command_runner.go:130] >           "options": {
	I1205 06:23:37.674990   48520 command_runner.go:130] >             "BinaryName": "",
	I1205 06:23:37.674994   48520 command_runner.go:130] >             "CriuImagePath": "",
	I1205 06:23:37.674998   48520 command_runner.go:130] >             "CriuWorkPath": "",
	I1205 06:23:37.675002   48520 command_runner.go:130] >             "IoGid": 0,
	I1205 06:23:37.675006   48520 command_runner.go:130] >             "IoUid": 0,
	I1205 06:23:37.675011   48520 command_runner.go:130] >             "NoNewKeyring": false,
	I1205 06:23:37.675018   48520 command_runner.go:130] >             "Root": "",
	I1205 06:23:37.675022   48520 command_runner.go:130] >             "ShimCgroup": "",
	I1205 06:23:37.675026   48520 command_runner.go:130] >             "SystemdCgroup": false
	I1205 06:23:37.675030   48520 command_runner.go:130] >           },
	I1205 06:23:37.675035   48520 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1205 06:23:37.675042   48520 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1205 06:23:37.675046   48520 command_runner.go:130] >           "runtimePath": "",
	I1205 06:23:37.675051   48520 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1205 06:23:37.675055   48520 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1205 06:23:37.675059   48520 command_runner.go:130] >           "snapshotter": ""
	I1205 06:23:37.675062   48520 command_runner.go:130] >         }
	I1205 06:23:37.675065   48520 command_runner.go:130] >       }
	I1205 06:23:37.675068   48520 command_runner.go:130] >     },
	I1205 06:23:37.675077   48520 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1205 06:23:37.675082   48520 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1205 06:23:37.675087   48520 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1205 06:23:37.675091   48520 command_runner.go:130] >     "disableApparmor": false,
	I1205 06:23:37.675096   48520 command_runner.go:130] >     "disableHugetlbController": true,
	I1205 06:23:37.675100   48520 command_runner.go:130] >     "disableProcMount": false,
	I1205 06:23:37.675104   48520 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1205 06:23:37.675108   48520 command_runner.go:130] >     "enableCDI": true,
	I1205 06:23:37.675112   48520 command_runner.go:130] >     "enableSelinux": false,
	I1205 06:23:37.675117   48520 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1205 06:23:37.675121   48520 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1205 06:23:37.675126   48520 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1205 06:23:37.675131   48520 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1205 06:23:37.675135   48520 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1205 06:23:37.675139   48520 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1205 06:23:37.675144   48520 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1205 06:23:37.675150   48520 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675154   48520 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1205 06:23:37.675159   48520 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675164   48520 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1205 06:23:37.675172   48520 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1205 06:23:37.675176   48520 command_runner.go:130] >   },
	I1205 06:23:37.675179   48520 command_runner.go:130] >   "features": {
	I1205 06:23:37.675184   48520 command_runner.go:130] >     "supplemental_groups_policy": true
	I1205 06:23:37.675187   48520 command_runner.go:130] >   },
	I1205 06:23:37.675190   48520 command_runner.go:130] >   "golang": "go1.24.9",
	I1205 06:23:37.675201   48520 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675211   48520 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675215   48520 command_runner.go:130] >   "runtimeHandlers": [
	I1205 06:23:37.675218   48520 command_runner.go:130] >     {
	I1205 06:23:37.675222   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675227   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675231   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675234   48520 command_runner.go:130] >       }
	I1205 06:23:37.675237   48520 command_runner.go:130] >     },
	I1205 06:23:37.675240   48520 command_runner.go:130] >     {
	I1205 06:23:37.675244   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675249   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675253   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675257   48520 command_runner.go:130] >       },
	I1205 06:23:37.675261   48520 command_runner.go:130] >       "name": "runc"
	I1205 06:23:37.675264   48520 command_runner.go:130] >     }
	I1205 06:23:37.675267   48520 command_runner.go:130] >   ],
	I1205 06:23:37.675270   48520 command_runner.go:130] >   "status": {
	I1205 06:23:37.675273   48520 command_runner.go:130] >     "conditions": [
	I1205 06:23:37.675277   48520 command_runner.go:130] >       {
	I1205 06:23:37.675280   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675284   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675288   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675292   48520 command_runner.go:130] >         "type": "RuntimeReady"
	I1205 06:23:37.675295   48520 command_runner.go:130] >       },
	I1205 06:23:37.675298   48520 command_runner.go:130] >       {
	I1205 06:23:37.675304   48520 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1205 06:23:37.675312   48520 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1205 06:23:37.675316   48520 command_runner.go:130] >         "status": false,
	I1205 06:23:37.675320   48520 command_runner.go:130] >         "type": "NetworkReady"
	I1205 06:23:37.675323   48520 command_runner.go:130] >       },
	I1205 06:23:37.675326   48520 command_runner.go:130] >       {
	I1205 06:23:37.675330   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675334   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675338   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675343   48520 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1205 06:23:37.675347   48520 command_runner.go:130] >       }
	I1205 06:23:37.675350   48520 command_runner.go:130] >     ]
	I1205 06:23:37.675353   48520 command_runner.go:130] >   }
	I1205 06:23:37.675356   48520 command_runner.go:130] > }
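In the crictl info dump above, RuntimeReady is true but NetworkReady is false ("cni plugin not initialized"), which is precisely why the cni.go lines that follow create a CNI manager and recommend kindnet. To watch just that condition flip once a CNI config is installed (jq assumed, as before):

    # Extract the NetworkReady condition from the runtime status.
    sudo crictl info | jq '.status.conditions[] | select(.type == "NetworkReady")'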
	I1205 06:23:37.675685   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:37.675695   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:37.675709   48520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:23:37.675732   48520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:23:37.675850   48520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:23:37.675917   48520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:23:37.682806   48520 command_runner.go:130] > kubeadm
	I1205 06:23:37.682826   48520 command_runner.go:130] > kubectl
	I1205 06:23:37.682831   48520 command_runner.go:130] > kubelet
	I1205 06:23:37.683692   48520 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:23:37.683790   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:23:37.691316   48520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:23:37.703871   48520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:23:37.716284   48520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
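The scp just above lands the rendered kubeadm config at /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases ship a validator that can be pointed at such a file before it is used; a sketch, assuming the binary path this log already confirmed:

    # Validate the rendered config against the v1beta4 schema.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new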
	I1205 06:23:37.728952   48520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:23:37.732950   48520 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:23:37.733083   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.845498   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:37.867115   48520 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:23:37.867139   48520 certs.go:195] generating shared ca certs ...
	I1205 06:23:37.867158   48520 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:37.867407   48520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:23:37.867492   48520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:23:37.867536   48520 certs.go:257] generating profile certs ...
	I1205 06:23:37.867696   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:23:37.867788   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:23:37.867863   48520 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:23:37.867878   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:23:37.867909   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:23:37.867937   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:23:37.867957   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:23:37.867990   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:23:37.868021   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:23:37.868041   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:23:37.868082   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:23:37.868158   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:23:37.868216   48520 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:23:37.868231   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:23:37.868276   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:23:37.868325   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:23:37.868373   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:23:37.868453   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:37.868510   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:37.868541   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem -> /usr/share/ca-certificates/4192.pem
	I1205 06:23:37.868568   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /usr/share/ca-certificates/41922.pem
	I1205 06:23:37.869214   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:23:37.888705   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:23:37.907292   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:23:37.928487   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:23:37.946435   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:23:37.964299   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:23:37.982113   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:23:37.999555   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:23:38.025054   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:23:38.044579   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:23:38.064934   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:23:38.085119   48520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:23:38.098666   48520 ssh_runner.go:195] Run: openssl version
	I1205 06:23:38.104661   48520 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:23:38.105114   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.112530   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:23:38.119940   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123892   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123985   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.124059   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.164658   48520 command_runner.go:130] > 51391683
	I1205 06:23:38.165135   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:23:38.172385   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.179652   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:23:38.187250   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190908   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190946   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190996   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.231356   48520 command_runner.go:130] > 3ec20f2e
	I1205 06:23:38.231428   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:23:38.238676   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.245835   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:23:38.252946   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256642   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256892   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256951   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.296975   48520 command_runner.go:130] > b5213941
	I1205 06:23:38.297434   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
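The openssl x509 -hash calls above compute the subject hashes (51391683, 3ec20f2e, b5213941) that OpenSSL's certificate lookup expects as <hash>.0 symlinks in /etc/ssl/certs, which is what each sudo test -L then verifies. The pattern, generalized for any PEM certificate:

    # Install a CA cert where OpenSSL's hashed lookup will find it.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0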
	I1205 06:23:38.304845   48520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308564   48520 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308587   48520 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:23:38.308594   48520 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1205 06:23:38.308601   48520 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:38.308607   48520 command_runner.go:130] > Access: 2025-12-05 06:19:31.018816392 +0000
	I1205 06:23:38.308612   48520 command_runner.go:130] > Modify: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308618   48520 command_runner.go:130] > Change: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308623   48520 command_runner.go:130] >  Birth: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308692   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:23:38.348984   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.349475   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:23:38.394714   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.395243   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:23:38.435818   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.436261   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:23:38.476805   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.477267   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:23:38.518071   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.518611   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:23:38.561014   48520 command_runner.go:130] > Certificate will not expire
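Each -checkend 86400 call above asks whether the certificate survives the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration before the cluster restart. The same sweep over every control-plane cert, written as a loop (path layout taken from this log; run under sudo because the certs directory is root-owned):

    # Exit status 0 per cert means "will not expire within a day".
    sudo sh -c 'for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      openssl x509 -noout -in "$c" -checkend 86400 >/dev/null && echo "OK $c" || echo "EXPIRING $c"
    done'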
	I1205 06:23:38.561491   48520 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:38.561574   48520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:23:38.561660   48520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:23:38.588277   48520 cri.go:89] found id: ""
	I1205 06:23:38.588366   48520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:23:38.596406   48520 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:23:38.596430   48520 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:23:38.596438   48520 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:23:38.597543   48520 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:23:38.597605   48520 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:23:38.597685   48520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:23:38.607655   48520 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:23:38.608093   48520 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.608241   48520 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "functional-101526" cluster setting kubeconfig missing "functional-101526" context setting]
	I1205 06:23:38.608622   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.609091   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.609324   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
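	The `kubeconfig.go:47` / `kubeconfig.go:62` lines above show the verify-and-repair step: the kubeconfig is loaded, the "functional-101526" cluster and context entries are found missing, and the file is rewritten under a lock. A hedged sketch of the detection half using client-go's clientcmd package; the profile name comes from the log, the kubeconfig path and helper name are illustrative:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// missingEntries reports which kubeconfig settings a profile still needs,
	// mirroring the "needs updating (will repair)" check in the log above.
	func missingEntries(kubeconfigPath, profile string) ([]string, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
		if err != nil {
			return nil, err
		}
		var missing []string
		if _, ok := cfg.Clusters[profile]; !ok {
			missing = append(missing, "cluster setting")
		}
		if _, ok := cfg.Contexts[profile]; !ok {
			missing = append(missing, "context setting")
		}
		return missing, nil
	}

	func main() {
		missing, err := missingEntries("/home/jenkins/.kube/config", "functional-101526")
		if err != nil {
			fmt.Println("load failed:", err)
			return
		}
		fmt.Println("kubeconfig missing:", missing)
	}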
	I1205 06:23:38.609886   48520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:23:38.610063   48520 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:23:38.610057   48520 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:23:38.610120   48520 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:23:38.610139   48520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:23:38.610175   48520 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:23:38.610495   48520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:23:38.619299   48520 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:23:38.619367   48520 kubeadm.go:602] duration metric: took 21.74243ms to restartPrimaryControlPlane
	I1205 06:23:38.619392   48520 kubeadm.go:403] duration metric: took 57.910865ms to StartCluster
	I1205 06:23:38.619420   48520 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.619502   48520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.620189   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.620458   48520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:23:38.620608   48520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:23:38.620940   48520 addons.go:70] Setting storage-provisioner=true in profile "functional-101526"
	I1205 06:23:38.621064   48520 addons.go:239] Setting addon storage-provisioner=true in "functional-101526"
	I1205 06:23:38.621113   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.620703   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:38.621254   48520 addons.go:70] Setting default-storageclass=true in profile "functional-101526"
	I1205 06:23:38.621267   48520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-101526"
	I1205 06:23:38.621543   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.621837   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.622827   48520 out.go:179] * Verifying Kubernetes components...
	I1205 06:23:38.624023   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:38.667927   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.668094   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.668372   48520 addons.go:239] Setting addon default-storageclass=true in "functional-101526"
	I1205 06:23:38.668400   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.668811   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.682967   48520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:23:38.684152   48520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.684170   48520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:23:38.684236   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.712186   48520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:38.712208   48520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:23:38.712271   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.728758   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.759681   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.830869   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:38.880502   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.894150   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
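	Both addon manifests are applied by shelling out to the node's bundled, version-matched kubectl with KUBECONFIG pointed at the in-VM config, exactly as the two Run lines above show. A minimal sketch of that invocation pattern; the paths are copied from the log, and the real flow runs this over SSH rather than locally:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command shape as the log: sudo accepts VAR=value assignments
		// before the command, so the kubeconfig is scoped to this one call.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("apply failed (left to the caller to retry):", err)
		}
	}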
	I1205 06:23:39.597389   48520 node_ready.go:35] waiting up to 6m0s for node "functional-101526" to be "Ready" ...
	I1205 06:23:39.597462   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597505   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597540   48520 retry.go:31] will retry after 347.041569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597590   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597614   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597624   48520 retry.go:31] will retry after 291.359395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
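	Each failed apply is handed to a retry helper (`retry.go:31`) that sleeps a randomized, growing delay before the next attempt; the jitter is why the waits above are uneven (347ms, 291ms, 542ms, ... up to 13.9s later in the log). A small Go sketch of that pattern, assuming nothing about minikube's actual helper beyond what the log shows:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a jittered, roughly doubling delay between tries, like the
	// "will retry after ..." lines in the log above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter: 50%..150% of the current base delay.
			d := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			base *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("connection refused")
			}
			return nil
		})
		fmt.Println("done after", calls, "calls, err =", err)
	}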
	I1205 06:23:39.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:23:39.597730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:39.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
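	Interleaved with the apply retries, the `node_ready.go:35` wait issues a GET against /api/v1/nodes/functional-101526 every 500ms and tolerates connection-refused responses while the apiserver comes back up. A bare-bones sketch of that polling loop with only the standard library; the endpoint, interval, and 6m0s budget are from the log, and certificate verification is skipped purely to keep the sketch self-contained (the real client authenticates with the profile's client certs):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only; never skip verification in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-101526"
		deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// Connection refused while the apiserver restarts: keep polling.
				fmt.Println("will retry:", err)
			} else {
				fmt.Println("apiserver answered with status", resp.StatusCode)
				resp.Body.Close()
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("node never became reachable")
	}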
	I1205 06:23:39.889264   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.945727   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:39.950448   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.950487   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.950523   48520 retry.go:31] will retry after 542.352885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018611   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.018720   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018748   48520 retry.go:31] will retry after 498.666832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.098033   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.098325   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.493962   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:40.518418   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:40.562108   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.562226   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.562260   48520 retry.go:31] will retry after 406.138698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588025   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.588062   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588081   48520 retry.go:31] will retry after 594.532888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.598248   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.598327   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.598636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.969306   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.034172   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.037396   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.037482   48520 retry.go:31] will retry after 875.411269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.098568   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.098689   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.098986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:41.183391   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:41.246665   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.246713   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.246732   48520 retry.go:31] will retry after 928.241992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.598231   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.598321   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:41.598695   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:41.913216   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.971936   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.975346   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.975382   48520 retry.go:31] will retry after 1.177811903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:42.175570   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:42.247042   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:42.247165   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.247197   48520 retry.go:31] will retry after 1.26909991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.598419   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.598544   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.598893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.097717   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.098051   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.154349   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:43.214165   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.217853   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.217885   48520 retry.go:31] will retry after 2.752289429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.517328   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:43.580346   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.580405   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.580434   48520 retry.go:31] will retry after 2.299289211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.598503   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.598628   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.598995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:43.599083   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:44.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.098502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.098803   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:44.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.597856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.097813   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.097918   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.098342   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.597661   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.880606   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:45.938914   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:45.938948   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.938966   48520 retry.go:31] will retry after 2.215203034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.971116   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:46.035840   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:46.035877   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.035895   48520 retry.go:31] will retry after 2.493998942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.098074   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.098239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.098559   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:46.098611   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:46.598405   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.598501   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.598815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.098358   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.098432   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.098766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.598407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.598667   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:48.098499   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.098899   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:48.098950   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:48.155209   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:48.214464   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.214512   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.214531   48520 retry.go:31] will retry after 5.617095307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.530967   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:48.587770   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.587811   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.587831   48520 retry.go:31] will retry after 3.714896929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.598174   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.598240   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.598490   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.098439   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.098511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.597635   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.597708   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.097641   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.097715   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.098020   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.598128   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:50.598177   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:51.097653   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.097726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:51.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.598434   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.598708   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.098476   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.098552   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.098854   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.303312   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:52.364380   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:52.367543   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.367573   48520 retry.go:31] will retry after 3.56011918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.597990   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.598059   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.598330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:52.598370   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:53.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.097720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.097995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.598131   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.832691   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:53.932471   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:53.935567   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:53.935601   48520 retry.go:31] will retry after 7.968340753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET node polls every ~500ms from 06:23:54.098 to 06:23:55.598, all refused; node_ready retry warning at 06:23:54.598 ...]
	I1205 06:23:55.928461   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:55.985797   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:55.985849   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:55.985868   48520 retry.go:31] will retry after 13.95380646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
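
The validation error itself is secondary: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver (the GET to /openapi/v2 in the message), so while the apiserver is refusing connections every manifest fails validation regardless of its contents, which is why the error suggests --validate=false. A minimal sketch of gating the apply on apiserver readiness, assuming the standard kube-apiserver /readyz endpoint and reusing the apiserver address from this log (illustration only, not minikube code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls /readyz until the apiserver answers 200 OK or the
// deadline passes, instead of letting kubectl's OpenAPI download fail with
// "connection refused".
func waitForAPIServer(url string, timeout time.Duration) error {
	// Illustration only: TLS verification is skipped here; real code should
	// trust the cluster CA from the kubeconfig instead.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the node polls in this log
	}
	return fmt.Errorf("apiserver at %s not ready within %v", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.49.2:8441", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}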
	[... GET node polls every ~500ms from 06:23:56.098 to 06:24:01.598, all refused; node_ready retry warnings at 06:23:57.098, 06:23:59.598 and 06:24:01.598 ...]
	I1205 06:24:01.904244   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:01.963282   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:01.966528   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:01.966559   48520 retry.go:31] will retry after 12.949527151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET node polls every ~500ms from 06:24:02.097 to 06:24:09.598, all refused; node_ready retry warnings at 06:24:04.098, 06:24:06.597 and 06:24:08.598 ...]
	I1205 06:24:09.939938   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:09.995364   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:09.998554   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:09.998588   48520 retry.go:31] will retry after 16.114489594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET node polls every ~500ms from 06:24:10.097 to 06:24:14.598, all refused; node_ready retry warnings at 06:24:11.098 and 06:24:13.598 ...]
	I1205 06:24:14.916824   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:14.975576   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:14.975628   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:14.975646   48520 retry.go:31] will retry after 12.242306889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET node polls every ~500ms from 06:24:15.097 to 06:24:26.097, all refused; node_ready retry warnings roughly every 2s (06:24:15.598 through 06:24:24.598) ...]
	I1205 06:24:26.114242   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:26.182245   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:26.182291   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.182309   48520 retry.go:31] will retry after 20.133806896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET node polls at 06:24:26.597 and 06:24:27.097, both refused; node_ready retry warning at 06:24:27.098 ...]
	I1205 06:24:27.218635   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:27.278311   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:27.278351   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.278369   48520 retry.go:31] will retry after 29.943294063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
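
For context on the node_ready.go warnings that run through this log: the check is a straightforward client-go poll of the node's "Ready" condition, and every iteration here fails at the TCP connect stage before any condition can be read. A minimal sketch of the equivalent check, with the kubeconfig path and node name taken from this log (this is not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
// While the apiserver is down, the Get itself fails with "connection refused",
// which is exactly what the warnings above show.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(cs, "functional-101526")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at this cadence
	}
}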
	I1205 06:24:27.597675   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.597766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.598047   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.097690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.098089   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.597760   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.598077   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.597938   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.598028   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.598339   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:29.598384   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:30.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:30.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.097803   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.098330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.597811   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.598159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:32.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:32.098247   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:32.598587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.598658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.097615   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.097683   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.098041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.598348   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.598685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:34.098505   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.098598   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.098917   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:34.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:34.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.598097   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.098294   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.598401   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.598478   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.598810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:36.098627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.098700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.099015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:36.099064   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 poll repeats every ~500 ms from 06:24:36.597 through 06:24:46.098, every response failing with "connect: connection refused"; node_ready.go:55 logs the same will-retry warning at 06:24:38.598, 06:24:40.598, 06:24:43.098, and 06:24:45.098 ...]
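
The loop above is minikube's node-readiness wait: node_ready.go re-GETs the node object twice a second and keeps retrying while the apiserver refuses connections. A minimal client-go sketch of that pattern follows; the names (waitNodeReady, the interval parameter) are illustrative assumptions, not minikube's actual implementation.

// Minimal client-go sketch of the readiness poll behind the node_ready.go
// lines above. Names and timings are assumptions; only the log-message
// shape mirrors the report.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-GETs the node on every tick and returns once the Ready
// condition is True; transient transport errors are logged and retried,
// which is what produces the repeating "will retry" warnings.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}
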
	I1205 06:24:46.316378   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:46.382136   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:46.385605   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.385642   48520 retry.go:31] will retry after 25.45198813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
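
The storageclass apply fails because kubectl cannot reach the apiserver, so minikube's retry.go schedules another attempt after a randomized delay (25.45s here, 41.47s for the storage-provisioner below). A hedged sketch of that run-then-backoff shape follows; applyAddon and the jitter bounds are assumptions, only the "will retry after" message mirrors the log.

// Hedged sketch of the apply-then-backoff behavior retry.go:31 reports.
package addons

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl and, on failure, sleeps a randomized
// delay (compare the 25.45s and 41.47s waits in the log) before retrying.
func applyAddon(kubectl, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%w\nstdout/stderr:\n%s", err, out)
		wait := time.Duration(10+rand.Intn(40)) * time.Second // assumed jitter bounds
		fmt.Printf("will retry after %s: %v\n", wait, lastErr)
		time.Sleep(wait)
	}
	return lastErr
}
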
	I1205 06:24:46.598118   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.598219   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.598522   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll continues every ~500 ms from 06:24:47.098 through 06:24:57.097, still refused; node_ready.go:55 repeats the will-retry warning at 06:24:47.098, 06:24:49.099, 06:24:51.598, 06:24:54.098, and 06:24:56.098 ...]
	I1205 06:24:57.222289   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:57.284849   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:57.284890   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.284910   48520 retry.go:31] will retry after 41.469992375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
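
Each of these failures is kubectl's client-side validation, which needs the apiserver's /openapi/v2 schema; with the endpoint down, the manifest cannot even be validated. The error text itself suggests --validate=false, which skips the schema fetch. Below is a sketch of what that fallback invocation would look like; whether minikube takes this path is an assumption, though the flag is standard kubectl.

// Sketch of the workaround suggested by the kubectl error text:
// --validate=false avoids the (unreachable) /openapi/v2 endpoint.
package addons

import "os/exec"

// applyWithoutValidation mirrors the logged command but disables
// client-side validation.
func applyWithoutValidation(kubeconfig, kubectl, manifest string) ([]byte, error) {
	return exec.Command("sudo",
		"KUBECONFIG="+kubeconfig, // sudo treats VAR=value prefixes as env, as in the log
		kubectl, "apply", "--force", "--validate=false", "-f", manifest).CombinedOutput()
}
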
	I1205 06:24:57.598343   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.598669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll continues every ~500 ms from 06:24:58.098 through 06:25:11.598, every attempt refused; node_ready.go:55 repeats the will-retry warning at 06:24:58.098, 06:25:00.598, 06:25:03.098, 06:25:05.098, 06:25:07.098, and 06:25:09.598 ...]
	I1205 06:25:11.838548   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:25:11.913959   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914006   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914113   48520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
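
At this point minikube stops retrying the default-storageclass addon: enabling an addon runs a list of callbacks, and out.go reports their aggregated errors as a non-fatal warning while startup continues. A loose sketch of that aggregation follows, with all names illustrative rather than minikube's actual code.

// Loose sketch of the "running callbacks" aggregation the out.go:285
// warning refers to: a callback failure becomes a warning, not an abort.
package addons

import "fmt"

type callback func() error

// runCallbacks collects every callback error; the caller prints the result
// as a warning (the "! Enabling ... returned an error" line) and continues.
func runCallbacks(cbs []callback) error {
	var errs []error
	for _, cb := range cbs {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		return fmt.Errorf("running callbacks: %v", errs)
	}
	return nil
}
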
	I1205 06:25:12.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.098446   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.098756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:12.098805   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll repeats every ~500 ms from 06:25:12.598 through 06:25:32.598 with the same refused connection; node_ready.go:55 logs the will-retry warning again at 06:25:14.598, 06:25:16.599, 06:25:19.098, 06:25:21.598, 06:25:23.598, 06:25:26.098, 06:25:28.098, 06:25:30.098, and 06:25:32.598 ...]
	I1205 06:25:33.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:33.597803   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.597872   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.598133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.097765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.098121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.598211   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.598290   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.598585   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:34.598631   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:35.098390   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.098471   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:35.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.598657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.598992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.097793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.598358   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.598693   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:36.598731   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:37.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.098568   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.098894   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:37.597976   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.598057   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.599817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:25:38.097679   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:38.598262   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.598388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:38.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
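	[editor's note] The loop above is minikube's node readiness gate: it keeps issuing GET /api/v1/nodes/functional-101526 and retries for as long as the apiserver refuses connections. A minimal Go sketch of that kind of poll follows; it is not minikube's actual node_ready.go, and the kubeconfig path and node name are placeholders copied from this log.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and node name are taken from the log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-101526", metav1.GetOptions{})
			if err != nil {
				// Mirrors the "will retry" warning above: transient errors such as
				// "connection refused" are logged and the poll continues.
				log.Printf("error getting node Ready status (will retry): %v", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}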
	I1205 06:25:38.755357   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:25:38.811504   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811556   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811634   48520 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:25:38.813895   48520 out.go:179] * Enabled addons: 
	I1205 06:25:38.815272   48520 addons.go:530] duration metric: took 2m0.19467206s for enable addons: enabled=[]
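	[editor's note] The storage-provisioner failure above is a validation failure rather than a manifest problem: kubectl apply fetches the apiserver's /openapi/v2 schema to validate the YAML, and that endpoint is refusing connections, so minikube's addon callback gives up after retrying. A rough Go sketch of such a retry wrapper is below, assuming only the kubectl binary and manifest path shown in the log; applyWithRetry is a hypothetical helper, not minikube's addons.go.

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// applyWithRetry runs `kubectl apply --force -f <manifest>` and retries on
	// failure, since validation needs the apiserver to be reachable.
	func applyWithRetry(kubectl, manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = e
			log.Printf("apply failed, will retry: %v\n%s", e, out)
			time.Sleep(2 * time.Second)
		}
		return err
	}

	func main() {
		// Paths are copied from the log; adjust for your environment.
		if err := applyWithRetry("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
			log.Fatal(err)
		}
	}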
	I1205 06:25:39.097850   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.097947   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.098277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET /api/v1/nodes/functional-101526 poll repeated every ~500 ms from 06:25:39 through 06:26:26 with no successful response; node_ready.go:55 kept logging the "will retry" connection-refused warning about every 2 s (first at 06:25:41, last at 06:26:24) ...]
	I1205 06:26:26.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:26:26.598513   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:26.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:26.598819   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:27.098598   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.098666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.098997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:27.598342   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.598674   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.098464   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.098548   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.098911   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.598054   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:29.097810   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:29.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:29.598097   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.598171   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.598512   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.098305   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.098383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.098722   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.598362   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.598429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.598778   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:31.098515   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.098594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.098941   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:31.099003   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:31.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.598069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.097668   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.097740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.098007   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.597672   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.598073   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.097726   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.098170   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.598322   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.598390   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:33.598681   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:34.098437   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.098514   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.098910   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:34.597764   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.597837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.598152   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.097815   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.097898   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.098218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.598058   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:36.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.098272   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:36.098329   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:36.597583   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.597647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.597901   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.097624   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.597700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.598016   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.097738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.597805   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.598144   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:38.598199   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:39.097874   48520 type.go:168] "Request Body" body=""
	I1205 06:26:39.097953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:39.098299   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:39.598056   48520 type.go:168] "Request Body" body=""
	I1205 06:26:39.598120   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:39.598381   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:40.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:26:40.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:40.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:40.597834   48520 type.go:168] "Request Body" body=""
	I1205 06:26:40.597922   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:40.598235   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:40.598299   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:41.097613   48520 type.go:168] "Request Body" body=""
	I1205 06:26:41.097684   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:41.097934   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:41.597698   48520 type.go:168] "Request Body" body=""
	I1205 06:26:41.597780   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:41.598095   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:42.097741   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.097846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.098252   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:42.597939   48520 type.go:168] "Request Body" body=""
	I1205 06:26:42.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:42.598259   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:43.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.097781   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.098098   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:43.098152   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:43.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:26:43.597750   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:43.598108   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.097834   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.098175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:44.598119   48520 type.go:168] "Request Body" body=""
	I1205 06:26:44.598197   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:44.598510   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:45.098372   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.098865   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:45.098935   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:45.598338   48520 type.go:168] "Request Body" body=""
	I1205 06:26:45.598404   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:45.598666   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.098497   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.098980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:46.597691   48520 type.go:168] "Request Body" body=""
	I1205 06:26:46.597786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:46.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.098061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:47.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:26:47.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:47.598104   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:47.598156   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:48.097827   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.098233   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:48.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:26:48.597728   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:48.597996   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.097698   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.097770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:49.597943   48520 type.go:168] "Request Body" body=""
	I1205 06:26:49.598016   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:49.598353   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:49.598407   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:50.097817   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.097883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:50.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:26:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:50.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.098145   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:51.597641   48520 type.go:168] "Request Body" body=""
	I1205 06:26:51.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:51.597986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:52.097689   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.098130   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:52.098200   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:26:52.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:52.598147   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.097621   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.097992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:53.597706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:53.597779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:53.598139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:54.097842   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.097924   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:54.098348   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:54.598061   48520 type.go:168] "Request Body" body=""
	I1205 06:26:54.598132   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:54.598397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.097700   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.097773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:55.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:26:55.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:55.598059   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:56.098320   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.098388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.098645   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:56.098686   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:56.598516   48520 type.go:168] "Request Body" body=""
	I1205 06:26:56.598594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:56.598880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.097600   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.097674   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.097997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:57.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:26:57.598395   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:57.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:58.098426   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.098498   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.098810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:58.098866   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:58.597573   48520 type.go:168] "Request Body" body=""
	I1205 06:26:58.597644   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:58.597980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.098351   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.098416   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.098680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:59.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:26:59.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:59.598057   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:00.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.099364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1205 06:27:00.099443   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:00.598194   48520 type.go:168] "Request Body" body=""
	I1205 06:27:00.598268   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:00.598536   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.098258   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.098330   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.098652   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:01.598444   48520 type.go:168] "Request Body" body=""
	I1205 06:27:01.598519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:01.598790   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.098423   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.098519   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.098885   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:02.597608   48520 type.go:168] "Request Body" body=""
	I1205 06:27:02.597679   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:02.598006   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:02.598063   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:03.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.098106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:03.597631   48520 type.go:168] "Request Body" body=""
	I1205 06:27:03.597706   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:03.598018   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.098154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:04.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:27:04.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:04.598468   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:04.598512   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:05.098189   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.098594   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:05.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:27:05.598461   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:05.598766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.098531   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.098612   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:06.598334   48520 type.go:168] "Request Body" body=""
	I1205 06:27:06.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:06.598739   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:06.598794   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:07.098498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.098572   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.098896   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:07.598506   48520 type.go:168] "Request Body" body=""
	I1205 06:27:07.598576   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:07.598842   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.098375   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.098481   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.098796   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:08.598436   48520 type.go:168] "Request Body" body=""
	I1205 06:27:08.598506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:08.598870   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:08.598917   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:09.098638   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.098713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.099031   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:09.598009   48520 type.go:168] "Request Body" body=""
	I1205 06:27:09.598075   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:09.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.098104   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.098192   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.098576   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:10.598351   48520 type.go:168] "Request Body" body=""
	I1205 06:27:10.598430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:10.598734   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:11.098387   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.098458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.098711   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:11.098748   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:11.598498   48520 type.go:168] "Request Body" body=""
	I1205 06:27:11.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:11.598845   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.097611   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.097693   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.098027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:12.597653   48520 type.go:168] "Request Body" body=""
	I1205 06:27:12.597725   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:12.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.097714   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.097786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:13.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:27:13.597777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:13.598079   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:13.598137   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:14.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.098014   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:14.598045   48520 type.go:168] "Request Body" body=""
	I1205 06:27:14.598117   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:14.598450   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.098264   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.098347   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:15.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:27:15.598409   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:15.598702   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:15.598770   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	... [log condensed: from 06:27:16 through 06:28:16 the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 poll repeats at ~500ms intervals (~120 attempts), each with the same Accept/User-Agent headers and an empty response; node_ready.go:55 logs the same warning roughly every 2s: error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused] ...
	I1205 06:28:17.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.097876   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.098194   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:17.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.597756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.598070   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.098215   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.597886   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.597953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.598207   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:18.598246   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:19.098191   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.098596   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:19.598092   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.598164   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.598453   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.098276   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.098366   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.098647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.598966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:20.599023   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:21.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.097783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:21.597816   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.597890   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.598175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.097699   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.097778   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.597810   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.597883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:23.097913   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.097992   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.098257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:23.098301   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:23.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.097782   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.598076   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.598142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.598394   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:25.098055   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.098130   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.098501   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:25.098557   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:25.598048   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.598119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.598461   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.098278   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.098345   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.098636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.598407   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.598479   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.598794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:27.098588   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.098668   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.099022   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:27.099091   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:27.598346   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.598675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.098428   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.098506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.098818   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.598580   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.598652   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.598974   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.097745   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.598001   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.598100   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.598428   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:29.598481   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:30.098006   48520 type.go:168] "Request Body" body=""
	I1205 06:28:30.098087   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:30.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:30.597700   48520 type.go:168] "Request Body" body=""
	I1205 06:28:30.597786   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:30.598160   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:31.097774   48520 type.go:168] "Request Body" body=""
	I1205 06:28:31.097846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:31.098181   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:31.597850   48520 type.go:168] "Request Body" body=""
	I1205 06:28:31.597930   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:31.598261   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:32.097657   48520 type.go:168] "Request Body" body=""
	I1205 06:28:32.097732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:32.098067   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:32.098128   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:32.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:28:32.597865   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:32.598198   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.097897   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.097968   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.098282   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:33.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:33.597749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:33.597992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.097753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:34.597944   48520 type.go:168] "Request Body" body=""
	I1205 06:28:34.598021   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:34.598350   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:34.598404   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:35.097649   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.097714   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.097970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:35.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:28:35.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:35.598762   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.098567   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.098647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.098983   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:36.598412   48520 type.go:168] "Request Body" body=""
	I1205 06:28:36.598488   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:36.598831   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:36.598888   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:37.098658   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.098727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.099076   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:37.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:37.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:37.598120   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.097779   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.097852   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.098158   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:38.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:28:38.597763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:38.598087   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:39.097696   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.097815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.098132   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:39.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:39.598044   48520 type.go:168] "Request Body" body=""
	I1205 06:28:39.598118   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:39.598367   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.097715   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.097794   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.098133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:40.597711   48520 type.go:168] "Request Body" body=""
	I1205 06:28:40.597788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:40.598088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.097630   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.097700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:41.597651   48520 type.go:168] "Request Body" body=""
	I1205 06:28:41.597723   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:41.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:41.598088   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:42.097792   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.098293   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:42.597741   48520 type.go:168] "Request Body" body=""
	I1205 06:28:42.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:42.598119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.097688   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.097766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:43.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:28:43.598466   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:43.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:43.598827   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:44.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.098438   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.098690   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:44.598631   48520 type.go:168] "Request Body" body=""
	I1205 06:28:44.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:44.598985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.097712   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.097807   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.098219   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:45.597940   48520 type.go:168] "Request Body" body=""
	I1205 06:28:45.598006   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:45.598275   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:46.097986   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.098060   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.098414   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:46.098473   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:46.598239   48520 type.go:168] "Request Body" body=""
	I1205 06:28:46.598322   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:46.598689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.098433   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.098520   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.098851   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:47.598595   48520 type.go:168] "Request Body" body=""
	I1205 06:28:47.598663   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:47.598967   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.097704   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.097777   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.098143   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:48.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:28:48.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:48.598005   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:48.598051   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:49.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.097752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.098085   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:28:49.598007   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:49.598332   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.097650   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.097722   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.098001   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:28:50.597770   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:50.598180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:50.598236   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:51.097912   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.097985   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.098261   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:51.597646   48520 type.go:168] "Request Body" body=""
	I1205 06:28:51.597720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:51.598030   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:52.097764   48520 type.go:168] "Request Body" body=""
	I1205 06:28:52.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:52.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:52.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:28:52.597753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:52.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:53.097629   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.097704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:53.597719   48520 type.go:168] "Request Body" body=""
	I1205 06:28:53.597789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:53.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.097886   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.098214   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:54.597970   48520 type.go:168] "Request Body" body=""
	I1205 06:28:54.598039   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:54.598297   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:55.097968   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.098096   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:55.098479   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:55.598243   48520 type.go:168] "Request Body" body=""
	I1205 06:28:55.598312   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:55.598632   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.098325   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.098392   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.098659   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:56.598446   48520 type.go:168] "Request Body" body=""
	I1205 06:28:56.598523   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:56.598834   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:57.098623   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.098697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.099008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:57.099062   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:57.597638   48520 type.go:168] "Request Body" body=""
	I1205 06:28:57.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:57.597977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.097791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:58.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:28:58.597925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:58.598287   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.098257   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.098326   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.098588   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:59.598617   48520 type.go:168] "Request Body" body=""
	I1205 06:28:59.598687   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:59.598989   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:59.599048   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:00.097762   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.097848   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.098260   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:00.597666   48520 type.go:168] "Request Body" body=""
	I1205 06:29:00.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:00.598027   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.097709   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.098107   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:01.597703   48520 type.go:168] "Request Body" body=""
	I1205 06:29:01.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:01.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:02.097797   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.098139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:02.098179   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:02.597827   48520 type.go:168] "Request Body" body=""
	I1205 06:29:02.597901   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:02.598200   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.097768   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:03.597806   48520 type.go:168] "Request Body" body=""
	I1205 06:29:03.597879   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:03.598126   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.097763   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:04.597968   48520 type.go:168] "Request Body" body=""
	I1205 06:29:04.598041   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:04.598369   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:04.598426   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:05.097837   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.097910   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.098172   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:05.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:29:05.597991   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:05.598317   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.097754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:06.597807   48520 type.go:168] "Request Body" body=""
	I1205 06:29:06.597874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:06.598213   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:07.098053   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll against /api/v1/nodes/functional-101526 repeats roughly every 500ms from 06:29:07 through 06:29:39, every response empty, with the identical node_ready "connection refused" warning logged periodically; duplicate entries elided ...]
	W1205 06:29:37.598248   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:39.098781   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.598588   48520 node_ready.go:38] duration metric: took 6m0.001106708s for node "functional-101526" to be "Ready" ...
	I1205 06:29:39.600415   48520 out.go:203] 
	W1205 06:29:39.601638   48520 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:29:39.601661   48520 out.go:285] * 
	W1205 06:29:39.603936   48520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:29:39.604891   48520 out.go:203] 
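
[editor's note] The six-minute loop above is minikube's node readiness wait: it re-issues the same GET every ~500ms until the node reports Ready or the deadline expires, then fails with GUEST_START. A minimal sketch of that pattern with client-go and apimachinery's wait helpers; the kubeconfig path and client setup are assumptions for illustration, not taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports the
// Ready condition as True, mirroring the node_ready loop in the log.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. connection refused while the
				// apiserver is down) are swallowed so the poll retries.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumed: a kubeconfig at the default location pointing at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-101526", 6*time.Minute); err != nil {
		fmt.Println("node never became ready:", err) // the failure mode seen above
	}
}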
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:47 functional-101526 containerd[5817]: time="2025-12-05T06:29:47.152273988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.113692449Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.115895789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.124136605Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.124626063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.066473548Z" level=info msg="No images store for sha256:a9ca98d5566ed58a7d480e0b547a763d077f5729130098d82d4323899cd8629c"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.068671169Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-101526\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.079777438Z" level=info msg="ImageCreate event name:\"sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.080191957Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.868471490Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.871112890Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.874206308Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.885755494Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.925762832Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.928324535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.935676430Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.936150864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.959895095Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.962193960Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.964242689Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.972002933Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.100203036Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.102505036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.110296271Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.110594687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
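
[editor's note] The ImageCreate/ImageUpdate/ImageDelete events above are containerd's view of minikube cycling its cached pause images. For reference, a minimal sketch of listing the same CRI-managed images through containerd's Go client; the socket path is the standard default and the v1 client API is assumed here:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the daemon emitting the events above (default socket assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images (the registry.k8s.io/pause tags above) live in
	// the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	imgs, err := client.ImageService().List(ctx)
	if err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name)
	}
}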
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:29:52.810785    9780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:52.812225    9780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:52.813083    9780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:52.814804    9780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:52.815105    9780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:29:52 up  1:12,  0 user,  load average: 0.40, 0.30, 0.52
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:29:49 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 05 06:29:50 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:50 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:50 functional-101526 kubelet[9535]: E1205 06:29:50.145358    9535 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 05 06:29:50 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:50 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:50 functional-101526 kubelet[9612]: E1205 06:29:50.918514    9612 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:50 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:51 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 05 06:29:51 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:51 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:51 functional-101526 kubelet[9674]: E1205 06:29:51.661350    9674 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:51 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:51 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 05 06:29:52 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:52 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:52 functional-101526 kubelet[9695]: E1205 06:29:52.423717    9695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
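
[editor's note] Every restart above (counters 823-826) dies on the same validation: this kubelet build refuses to run on a cgroup v1 host, which is the root cause of the apiserver never coming up. A quick way to check which cgroup version a host exposes is to statfs /sys/fs/cgroup; a minimal sketch using golang.org/x/sys/unix (the magic-number check is standard Linux practice, not taken from this log):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// On a cgroup v2 (unified) host, /sys/fs/cgroup is a cgroup2 mount;
	// on cgroup v1 it is a tmpfs with per-controller subdirectories.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified) - accepted by this kubelet")
	} else {
		fmt.Println("cgroup v1 - this kubelet build refuses to start")
	}
}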
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (389.089373ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.28s)
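
[editor's note] The `--format={{.APIServer}}` flag used by the post-mortem above is a Go text/template rendered over minikube's status struct, which is why the command prints only `Stopped`. A minimal sketch of how such a template evaluates; the struct below is an illustrative stand-in modeling just the fields seen in this report, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders;
// only fields appearing in the output above are modeled.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Stopped
}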

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-101526 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-101526 get pods: exit status 1 (124.972438ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-101526 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
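
[editor's note] The post-mortem captures the full `docker inspect` dump above; when only a field or two matters (container state, the 8441 apiserver port mapping), the same data can be fetched programmatically. A minimal sketch with the Docker Go SDK, assuming a local daemon reachable via the standard environment and the classic (pre-v2 module) client API:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Same data as `docker inspect functional-101526`, as a struct.
	info, err := cli.ContainerInspect(context.Background(), "functional-101526")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status) // "running" in the dump above

	// The apiserver port published to the host (8441/tcp -> 127.0.0.1:32791 above).
	for _, b := range info.NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("8441/tcp -> %s:%s\n", b.HostIP, b.HostPort)
	}
}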
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (300.939287ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
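A note on the "(may be ok)" wording: `minikube status` encodes component state in its exit code, so a non-zero exit can accompany perfectly usable stdout (here the host is "Running"). The following Go sketch mirrors the tolerant check the harness performs above; the binary path and profile name are copied from this report, and the exit-code semantics are minikube's own, not re-derived here:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-101526", "-n", "functional-101526")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Non-zero exit (status 2 above) is recorded but not fatal;
			// the captured stdout may still report a running host.
			fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
		} else if err != nil {
			panic(err) // the binary could not be launched at all
		}
		fmt.Print(string(out))
	}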
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-226068 image ls --format short --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format yaml --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format json --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format table --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh     │ functional-226068 ssh pgrep buildkitd                                                                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image   │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                  │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls                                                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete  │ -p functional-226068                                                                                                                                    │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start   │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start   │ -p functional-101526 --alsologtostderr -v=8                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:latest                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add minikube-local-cache-test:functional-101526                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache delete minikube-local-cache-test:functional-101526                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl images                                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ cache   │ functional-101526 cache reload                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ kubectl │ functional-101526 kubectl -- --context functional-101526 get pods                                                                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:23:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:23:34.555640   48520 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:23:34.555757   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.555768   48520 out.go:374] Setting ErrFile to fd 2...
	I1205 06:23:34.555773   48520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:23:34.556051   48520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:23:34.556413   48520 out.go:368] Setting JSON to false
	I1205 06:23:34.557238   48520 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3961,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:23:34.557311   48520 start.go:143] virtualization:  
	I1205 06:23:34.559039   48520 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:23:34.560249   48520 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:23:34.560305   48520 notify.go:221] Checking for updates...
	I1205 06:23:34.562854   48520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:23:34.564039   48520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:34.565137   48520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:23:34.566333   48520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:23:34.567598   48520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:23:34.569245   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:34.569354   48520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:23:34.590301   48520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:23:34.590415   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.653386   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.643338894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.653494   48520 docker.go:319] overlay module found
	I1205 06:23:34.655010   48520 out.go:179] * Using the docker driver based on existing profile
	I1205 06:23:34.656153   48520 start.go:309] selected driver: docker
	I1205 06:23:34.656167   48520 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.656269   48520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:23:34.656363   48520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:23:34.713521   48520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:23:34.704040472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:23:34.713916   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:34.713979   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:23:34.714025   48520 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:34.715459   48520 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:23:34.716546   48520 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:23:34.717743   48520 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:23:34.719027   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:34.719180   48520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:23:34.738218   48520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:23:34.738240   48520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:23:34.779237   48520 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:23:34.998431   48520 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 06:23:34.998624   48520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:23:34.998714   48520 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998796   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:23:34.998805   48520 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.154µs
	I1205 06:23:34.998818   48520 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:23:34.998828   48520 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998857   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:23:34.998862   48520 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.504µs
	I1205 06:23:34.998868   48520 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998878   48520 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998890   48520 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:23:34.998904   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:23:34.998909   48520 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 32.361µs
	I1205 06:23:34.998916   48520 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998919   48520 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998925   48520 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.998953   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:23:34.998955   48520 start.go:364] duration metric: took 23.967µs to acquireMachinesLock for "functional-101526"
	I1205 06:23:34.998958   48520 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.961µs
	I1205 06:23:34.998965   48520 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:23:34.998968   48520 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:23:34.998973   48520 fix.go:54] fixHost starting: 
	I1205 06:23:34.998973   48520 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999001   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:23:34.999006   48520 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.323µs
	I1205 06:23:34.999012   48520 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:23:34.999020   48520 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999055   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:23:34.999060   48520 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.108µs
	I1205 06:23:34.999066   48520 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:23:34.999076   48520 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999117   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:23:34.999122   48520 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.426µs
	I1205 06:23:34.999127   48520 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:23:34.999135   48520 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:23:34.999162   48520 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:23:34.999167   48520 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.427µs
	I1205 06:23:34.999172   48520 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:23:34.999180   48520 cache.go:87] Successfully saved all images to host disk.
	I1205 06:23:34.999246   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:35.021908   48520 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:23:35.021948   48520 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:23:35.023534   48520 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:23:35.023573   48520 machine.go:94] provisionDockerMachine start ...
	I1205 06:23:35.023662   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.041007   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.041395   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.041419   48520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:23:35.188597   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.188620   48520 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:23:35.188686   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.205143   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.205585   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.205604   48520 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:23:35.361531   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:23:35.361628   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.381210   48520 main.go:143] libmachine: Using SSH client type: native
	I1205 06:23:35.381606   48520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:23:35.381630   48520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:23:35.529415   48520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:23:35.529441   48520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:23:35.529467   48520 ubuntu.go:190] setting up certificates
	I1205 06:23:35.529477   48520 provision.go:84] configureAuth start
	I1205 06:23:35.529543   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:35.549800   48520 provision.go:143] copyHostCerts
	I1205 06:23:35.549840   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549879   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:23:35.549910   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:23:35.549992   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:23:35.550081   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550102   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:23:35.550111   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:23:35.550138   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:23:35.550192   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550212   48520 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:23:35.550220   48520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:23:35.550244   48520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:23:35.550303   48520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:23:35.896062   48520 provision.go:177] copyRemoteCerts
	I1205 06:23:35.896131   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:23:35.896172   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:35.915295   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.022077   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:23:36.022150   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:23:36.041535   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:23:36.041647   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:23:36.060235   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:23:36.060320   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:23:36.078423   48520 provision.go:87] duration metric: took 548.924199ms to configureAuth
	I1205 06:23:36.078451   48520 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:23:36.078638   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:36.078652   48520 machine.go:97] duration metric: took 1.055064213s to provisionDockerMachine
	I1205 06:23:36.078660   48520 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:23:36.078671   48520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:23:36.078720   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:23:36.078768   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.096049   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.200907   48520 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:23:36.204162   48520 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:23:36.204182   48520 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:23:36.204187   48520 command_runner.go:130] > VERSION_ID="12"
	I1205 06:23:36.204192   48520 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:23:36.204196   48520 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:23:36.204200   48520 command_runner.go:130] > ID=debian
	I1205 06:23:36.204205   48520 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:23:36.204210   48520 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:23:36.204232   48520 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:23:36.204297   48520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:23:36.204316   48520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:23:36.204326   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:23:36.204380   48520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:23:36.204473   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:23:36.204485   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /etc/ssl/certs/41922.pem
	I1205 06:23:36.204565   48520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:23:36.204573   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> /etc/test/nested/copy/4192/hosts
	I1205 06:23:36.204620   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:23:36.211988   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:36.229308   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:23:36.246073   48520 start.go:296] duration metric: took 167.399532ms for postStartSetup
	I1205 06:23:36.246163   48520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:23:36.246202   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.262461   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.366102   48520 command_runner.go:130] > 13%
	I1205 06:23:36.366647   48520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:23:36.370745   48520 command_runner.go:130] > 169G
	I1205 06:23:36.371285   48520 fix.go:56] duration metric: took 1.372308275s for fixHost
	I1205 06:23:36.371306   48520 start.go:83] releasing machines lock for "functional-101526", held for 1.37234313s
	I1205 06:23:36.371420   48520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:23:36.390415   48520 ssh_runner.go:195] Run: cat /version.json
	I1205 06:23:36.390468   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.391053   48520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:23:36.391113   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:36.419642   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.424516   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:36.520794   48520 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:23:36.520923   48520 ssh_runner.go:195] Run: systemctl --version
	I1205 06:23:36.606649   48520 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:23:36.609416   48520 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:23:36.609453   48520 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:23:36.609534   48520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:23:36.613918   48520 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:23:36.613964   48520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:23:36.614023   48520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:23:36.621686   48520 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:23:36.621710   48520 start.go:496] detecting cgroup driver to use...
	I1205 06:23:36.621769   48520 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:23:36.621841   48520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:23:36.637331   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:23:36.650267   48520 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:23:36.650327   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:23:36.665934   48520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:23:36.679279   48520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:23:36.785775   48520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:23:36.894469   48520 docker.go:234] disabling docker service ...
	I1205 06:23:36.894545   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:23:36.910313   48520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:23:36.923239   48520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:23:37.033287   48520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:23:37.168163   48520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:23:37.180578   48520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:23:37.193942   48520 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1205 06:23:37.194023   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:23:37.202471   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:23:37.211003   48520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:23:37.211119   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:23:37.219839   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.228562   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:23:37.237276   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:23:37.245970   48520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:23:37.253895   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:23:37.262450   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:23:37.271505   48520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:23:37.280464   48520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:23:37.287174   48520 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:23:37.288154   48520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:23:37.295694   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.408389   48520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:23:37.517122   48520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:23:37.517255   48520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:23:37.521337   48520 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1205 06:23:37.521369   48520 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:23:37.521389   48520 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1205 06:23:37.521397   48520 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:37.521404   48520 command_runner.go:130] > Access: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521409   48520 command_runner.go:130] > Modify: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521418   48520 command_runner.go:130] > Change: 2025-12-05 06:23:37.489109762 +0000
	I1205 06:23:37.521422   48520 command_runner.go:130] >  Birth: -
	I1205 06:23:37.521666   48520 start.go:564] Will wait 60s for crictl version
	I1205 06:23:37.521723   48520 ssh_runner.go:195] Run: which crictl
	I1205 06:23:37.524716   48520 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:23:37.525219   48520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:23:37.548325   48520 command_runner.go:130] > Version:  0.1.0
	I1205 06:23:37.548510   48520 command_runner.go:130] > RuntimeName:  containerd
	I1205 06:23:37.548666   48520 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1205 06:23:37.548827   48520 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:23:37.551185   48520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:23:37.551250   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.571456   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.573276   48520 ssh_runner.go:195] Run: containerd --version
	I1205 06:23:37.591907   48520 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1205 06:23:37.597675   48520 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:23:37.598882   48520 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:23:37.617416   48520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:23:37.621349   48520 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:23:37.621511   48520 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:23:37.621626   48520 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:23:37.621687   48520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:23:37.643465   48520 command_runner.go:130] > {
	I1205 06:23:37.643493   48520 command_runner.go:130] >   "images":  [
	I1205 06:23:37.643498   48520 command_runner.go:130] >     {
	I1205 06:23:37.643515   48520 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:23:37.643522   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643527   48520 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:23:37.643531   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643535   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643540   48520 command_runner.go:130] >       "size":  "8032639",
	I1205 06:23:37.643545   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643549   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643552   48520 command_runner.go:130] >     },
	I1205 06:23:37.643566   48520 command_runner.go:130] >     {
	I1205 06:23:37.643574   48520 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:23:37.643578   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643583   48520 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:23:37.643586   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643591   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643597   48520 command_runner.go:130] >       "size":  "21166088",
	I1205 06:23:37.643601   48520 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:23:37.643605   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643608   48520 command_runner.go:130] >     },
	I1205 06:23:37.643611   48520 command_runner.go:130] >     {
	I1205 06:23:37.643618   48520 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:23:37.643622   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643627   48520 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:23:37.643630   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643634   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643638   48520 command_runner.go:130] >       "size":  "21134420",
	I1205 06:23:37.643642   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643645   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643648   48520 command_runner.go:130] >       },
	I1205 06:23:37.643652   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643656   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643660   48520 command_runner.go:130] >     },
	I1205 06:23:37.643663   48520 command_runner.go:130] >     {
	I1205 06:23:37.643670   48520 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:23:37.643674   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643687   48520 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:23:37.643693   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643698   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643703   48520 command_runner.go:130] >       "size":  "24676285",
	I1205 06:23:37.643707   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643715   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643719   48520 command_runner.go:130] >       },
	I1205 06:23:37.643727   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643734   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643737   48520 command_runner.go:130] >     },
	I1205 06:23:37.643740   48520 command_runner.go:130] >     {
	I1205 06:23:37.643747   48520 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:23:37.643750   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643756   48520 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:23:37.643759   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643763   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643767   48520 command_runner.go:130] >       "size":  "20658969",
	I1205 06:23:37.643771   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643783   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643790   48520 command_runner.go:130] >       },
	I1205 06:23:37.643794   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643798   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643800   48520 command_runner.go:130] >     },
	I1205 06:23:37.643804   48520 command_runner.go:130] >     {
	I1205 06:23:37.643811   48520 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:23:37.643817   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643822   48520 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:23:37.643826   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643830   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643835   48520 command_runner.go:130] >       "size":  "22428165",
	I1205 06:23:37.643840   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643844   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643853   48520 command_runner.go:130] >     },
	I1205 06:23:37.643856   48520 command_runner.go:130] >     {
	I1205 06:23:37.643863   48520 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:23:37.643867   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.643873   48520 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:23:37.643878   48520 command_runner.go:130] >       ],
	I1205 06:23:37.643887   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.643893   48520 command_runner.go:130] >       "size":  "15389290",
	I1205 06:23:37.643900   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.643905   48520 command_runner.go:130] >         "value":  "0"
	I1205 06:23:37.643908   48520 command_runner.go:130] >       },
	I1205 06:23:37.643911   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.643915   48520 command_runner.go:130] >       "pinned":  false
	I1205 06:23:37.643918   48520 command_runner.go:130] >     },
	I1205 06:23:37.643921   48520 command_runner.go:130] >     {
	I1205 06:23:37.644021   48520 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:23:37.644028   48520 command_runner.go:130] >       "repoTags":  [
	I1205 06:23:37.644033   48520 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:23:37.644036   48520 command_runner.go:130] >       ],
	I1205 06:23:37.644041   48520 command_runner.go:130] >       "repoDigests":  [],
	I1205 06:23:37.644045   48520 command_runner.go:130] >       "size":  "265458",
	I1205 06:23:37.644049   48520 command_runner.go:130] >       "uid":  {
	I1205 06:23:37.644056   48520 command_runner.go:130] >         "value":  "65535"
	I1205 06:23:37.644060   48520 command_runner.go:130] >       },
	I1205 06:23:37.644064   48520 command_runner.go:130] >       "username":  "",
	I1205 06:23:37.644075   48520 command_runner.go:130] >       "pinned":  true
	I1205 06:23:37.644078   48520 command_runner.go:130] >     }
	I1205 06:23:37.644081   48520 command_runner.go:130] >   ]
	I1205 06:23:37.644084   48520 command_runner.go:130] > }
	I1205 06:23:37.646462   48520 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:23:37.646482   48520 cache_images.go:86] Images are preloaded, skipping loading
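The preload decision above comes straight from that `crictl images --output json` payload. A minimal Go sketch of the same check, decoding only the fields visible in the log (the `required` set here is an illustrative subset, not minikube's actual list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields visible in the crictl output above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Illustrative subset of the images the log expects to be preloaded.
	required := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0": false,
		"registry.k8s.io/etcd:3.6.5-0":                  false,
		"registry.k8s.io/pause:3.10.1":                  false,
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if _, ok := required[tag]; ok {
				required[tag] = true
			}
		}
	}
	for tag, found := range required {
		fmt.Printf("%-50s preloaded=%v\n", tag, found)
	}
}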
	I1205 06:23:37.646489   48520 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:23:37.646588   48520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
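The kubelet drop-in shown above is rendered from the node config. A hedged sketch of that kind of templating with text/template (the template shape is inferred from the log output, not copied from minikube's source; the values are the ones logged above):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template shaped like the drop-in in the log.
// The empty ExecStart= line is the systemd convention for resetting the
// command before redefining it in a drop-in.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values copied from the log output above.
	data := struct {
		Runtime, KubernetesVersion, NodeName, NodeIP string
	}{"containerd", "v1.35.0-beta.0", "functional-101526", "192.168.49.2"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}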
	I1205 06:23:37.646657   48520 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:23:37.674707   48520 command_runner.go:130] > {
	I1205 06:23:37.674726   48520 command_runner.go:130] >   "cniconfig": {
	I1205 06:23:37.674732   48520 command_runner.go:130] >     "Networks": [
	I1205 06:23:37.674735   48520 command_runner.go:130] >       {
	I1205 06:23:37.674741   48520 command_runner.go:130] >         "Config": {
	I1205 06:23:37.674745   48520 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1205 06:23:37.674752   48520 command_runner.go:130] >           "Name": "cni-loopback",
	I1205 06:23:37.674757   48520 command_runner.go:130] >           "Plugins": [
	I1205 06:23:37.674761   48520 command_runner.go:130] >             {
	I1205 06:23:37.674765   48520 command_runner.go:130] >               "Network": {
	I1205 06:23:37.674769   48520 command_runner.go:130] >                 "ipam": {},
	I1205 06:23:37.674775   48520 command_runner.go:130] >                 "type": "loopback"
	I1205 06:23:37.674779   48520 command_runner.go:130] >               },
	I1205 06:23:37.674785   48520 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1205 06:23:37.674788   48520 command_runner.go:130] >             }
	I1205 06:23:37.674792   48520 command_runner.go:130] >           ],
	I1205 06:23:37.674802   48520 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1205 06:23:37.674806   48520 command_runner.go:130] >         },
	I1205 06:23:37.674813   48520 command_runner.go:130] >         "IFName": "lo"
	I1205 06:23:37.674816   48520 command_runner.go:130] >       }
	I1205 06:23:37.674820   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674825   48520 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1205 06:23:37.674829   48520 command_runner.go:130] >     "PluginDirs": [
	I1205 06:23:37.674832   48520 command_runner.go:130] >       "/opt/cni/bin"
	I1205 06:23:37.674836   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674840   48520 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1205 06:23:37.674844   48520 command_runner.go:130] >     "Prefix": "eth"
	I1205 06:23:37.674846   48520 command_runner.go:130] >   },
	I1205 06:23:37.674850   48520 command_runner.go:130] >   "config": {
	I1205 06:23:37.674854   48520 command_runner.go:130] >     "cdiSpecDirs": [
	I1205 06:23:37.674858   48520 command_runner.go:130] >       "/etc/cdi",
	I1205 06:23:37.674862   48520 command_runner.go:130] >       "/var/run/cdi"
	I1205 06:23:37.674871   48520 command_runner.go:130] >     ],
	I1205 06:23:37.674875   48520 command_runner.go:130] >     "cni": {
	I1205 06:23:37.674879   48520 command_runner.go:130] >       "binDir": "",
	I1205 06:23:37.674883   48520 command_runner.go:130] >       "binDirs": [
	I1205 06:23:37.674888   48520 command_runner.go:130] >         "/opt/cni/bin"
	I1205 06:23:37.674891   48520 command_runner.go:130] >       ],
	I1205 06:23:37.674895   48520 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1205 06:23:37.674899   48520 command_runner.go:130] >       "confTemplate": "",
	I1205 06:23:37.674903   48520 command_runner.go:130] >       "ipPref": "",
	I1205 06:23:37.674907   48520 command_runner.go:130] >       "maxConfNum": 1,
	I1205 06:23:37.674911   48520 command_runner.go:130] >       "setupSerially": false,
	I1205 06:23:37.674916   48520 command_runner.go:130] >       "useInternalLoopback": false
	I1205 06:23:37.674919   48520 command_runner.go:130] >     },
	I1205 06:23:37.674927   48520 command_runner.go:130] >     "containerd": {
	I1205 06:23:37.674932   48520 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1205 06:23:37.674937   48520 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1205 06:23:37.674942   48520 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1205 06:23:37.674946   48520 command_runner.go:130] >       "runtimes": {
	I1205 06:23:37.674950   48520 command_runner.go:130] >         "runc": {
	I1205 06:23:37.674955   48520 command_runner.go:130] >           "ContainerAnnotations": null,
	I1205 06:23:37.674959   48520 command_runner.go:130] >           "PodAnnotations": null,
	I1205 06:23:37.674965   48520 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1205 06:23:37.674969   48520 command_runner.go:130] >           "cgroupWritable": false,
	I1205 06:23:37.674974   48520 command_runner.go:130] >           "cniConfDir": "",
	I1205 06:23:37.674978   48520 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1205 06:23:37.674982   48520 command_runner.go:130] >           "io_type": "",
	I1205 06:23:37.674986   48520 command_runner.go:130] >           "options": {
	I1205 06:23:37.674990   48520 command_runner.go:130] >             "BinaryName": "",
	I1205 06:23:37.674994   48520 command_runner.go:130] >             "CriuImagePath": "",
	I1205 06:23:37.674998   48520 command_runner.go:130] >             "CriuWorkPath": "",
	I1205 06:23:37.675002   48520 command_runner.go:130] >             "IoGid": 0,
	I1205 06:23:37.675006   48520 command_runner.go:130] >             "IoUid": 0,
	I1205 06:23:37.675011   48520 command_runner.go:130] >             "NoNewKeyring": false,
	I1205 06:23:37.675018   48520 command_runner.go:130] >             "Root": "",
	I1205 06:23:37.675022   48520 command_runner.go:130] >             "ShimCgroup": "",
	I1205 06:23:37.675026   48520 command_runner.go:130] >             "SystemdCgroup": false
	I1205 06:23:37.675030   48520 command_runner.go:130] >           },
	I1205 06:23:37.675035   48520 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1205 06:23:37.675042   48520 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1205 06:23:37.675046   48520 command_runner.go:130] >           "runtimePath": "",
	I1205 06:23:37.675051   48520 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1205 06:23:37.675055   48520 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1205 06:23:37.675059   48520 command_runner.go:130] >           "snapshotter": ""
	I1205 06:23:37.675062   48520 command_runner.go:130] >         }
	I1205 06:23:37.675065   48520 command_runner.go:130] >       }
	I1205 06:23:37.675068   48520 command_runner.go:130] >     },
	I1205 06:23:37.675077   48520 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1205 06:23:37.675082   48520 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1205 06:23:37.675087   48520 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1205 06:23:37.675091   48520 command_runner.go:130] >     "disableApparmor": false,
	I1205 06:23:37.675096   48520 command_runner.go:130] >     "disableHugetlbController": true,
	I1205 06:23:37.675100   48520 command_runner.go:130] >     "disableProcMount": false,
	I1205 06:23:37.675104   48520 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1205 06:23:37.675108   48520 command_runner.go:130] >     "enableCDI": true,
	I1205 06:23:37.675112   48520 command_runner.go:130] >     "enableSelinux": false,
	I1205 06:23:37.675117   48520 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1205 06:23:37.675121   48520 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1205 06:23:37.675126   48520 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1205 06:23:37.675131   48520 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1205 06:23:37.675135   48520 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1205 06:23:37.675139   48520 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1205 06:23:37.675144   48520 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1205 06:23:37.675150   48520 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675154   48520 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1205 06:23:37.675159   48520 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1205 06:23:37.675164   48520 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1205 06:23:37.675172   48520 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1205 06:23:37.675176   48520 command_runner.go:130] >   },
	I1205 06:23:37.675179   48520 command_runner.go:130] >   "features": {
	I1205 06:23:37.675184   48520 command_runner.go:130] >     "supplemental_groups_policy": true
	I1205 06:23:37.675187   48520 command_runner.go:130] >   },
	I1205 06:23:37.675190   48520 command_runner.go:130] >   "golang": "go1.24.9",
	I1205 06:23:37.675201   48520 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675211   48520 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1205 06:23:37.675215   48520 command_runner.go:130] >   "runtimeHandlers": [
	I1205 06:23:37.675218   48520 command_runner.go:130] >     {
	I1205 06:23:37.675222   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675227   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675231   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675234   48520 command_runner.go:130] >       }
	I1205 06:23:37.675237   48520 command_runner.go:130] >     },
	I1205 06:23:37.675240   48520 command_runner.go:130] >     {
	I1205 06:23:37.675244   48520 command_runner.go:130] >       "features": {
	I1205 06:23:37.675249   48520 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1205 06:23:37.675253   48520 command_runner.go:130] >         "user_namespaces": true
	I1205 06:23:37.675257   48520 command_runner.go:130] >       },
	I1205 06:23:37.675261   48520 command_runner.go:130] >       "name": "runc"
	I1205 06:23:37.675264   48520 command_runner.go:130] >     }
	I1205 06:23:37.675267   48520 command_runner.go:130] >   ],
	I1205 06:23:37.675270   48520 command_runner.go:130] >   "status": {
	I1205 06:23:37.675273   48520 command_runner.go:130] >     "conditions": [
	I1205 06:23:37.675277   48520 command_runner.go:130] >       {
	I1205 06:23:37.675280   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675284   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675288   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675292   48520 command_runner.go:130] >         "type": "RuntimeReady"
	I1205 06:23:37.675295   48520 command_runner.go:130] >       },
	I1205 06:23:37.675298   48520 command_runner.go:130] >       {
	I1205 06:23:37.675304   48520 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1205 06:23:37.675312   48520 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1205 06:23:37.675316   48520 command_runner.go:130] >         "status": false,
	I1205 06:23:37.675320   48520 command_runner.go:130] >         "type": "NetworkReady"
	I1205 06:23:37.675323   48520 command_runner.go:130] >       },
	I1205 06:23:37.675326   48520 command_runner.go:130] >       {
	I1205 06:23:37.675330   48520 command_runner.go:130] >         "message": "",
	I1205 06:23:37.675334   48520 command_runner.go:130] >         "reason": "",
	I1205 06:23:37.675338   48520 command_runner.go:130] >         "status": true,
	I1205 06:23:37.675343   48520 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1205 06:23:37.675347   48520 command_runner.go:130] >       }
	I1205 06:23:37.675350   48520 command_runner.go:130] >     ]
	I1205 06:23:37.675353   48520 command_runner.go:130] >   }
	I1205 06:23:37.675356   48520 command_runner.go:130] > }
	I1205 06:23:37.675685   48520 cni.go:84] Creating CNI manager for ""
	I1205 06:23:37.675695   48520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
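The kindnet recommendation follows from the same `crictl info` output: the runtime is ready, but NetworkReady stays false until a CNI config lands in /etc/cni/net.d. A small sketch, assuming only the status.conditions layout shown above, that surfaces any false condition:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runtimeStatus captures just the status.conditions slice from `crictl info`.
type runtimeStatus struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var st runtimeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	for _, c := range st.Status.Conditions {
		if !c.Status {
			// In the log above this fires for NetworkReady until a CNI
			// (here kindnet) is installed.
			fmt.Printf("condition %s is false: %s (%s)\n", c.Type, c.Reason, c.Message)
		}
	}
}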
	I1205 06:23:37.675709   48520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:23:37.675732   48520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:23:37.675850   48520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
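The generated config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A rough sanity-check sketch that lists the kind and apiVersion of each document before it is shipped to /var/tmp/minikube/kubeadm.yaml.new (naive line parsing, illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// listDocs splits the multi-document kubeadm config on "---" separators and
// reports the apiVersion and kind each document declares.
func listDocs(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		var api, kind string
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "apiVersion:") {
				api = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		kinds = append(kinds, kind+" ("+api+")")
	}
	return kinds
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, k := range listDocs(string(data)) {
		fmt.Println(k)
	}
}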
	
	I1205 06:23:37.675917   48520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:23:37.682806   48520 command_runner.go:130] > kubeadm
	I1205 06:23:37.682826   48520 command_runner.go:130] > kubectl
	I1205 06:23:37.682831   48520 command_runner.go:130] > kubelet
	I1205 06:23:37.683692   48520 binaries.go:51] Found k8s binaries, skipping transfer
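The "Found k8s binaries, skipping transfer" decision is just a presence check on the three binaries listed by the `ls` above. A sketch of that check (path taken from the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// haveBinaries reports whether kubeadm, kubectl, and kubelet already exist
// under the per-version binaries dir, in which case no transfer is needed.
func haveBinaries(dir string) bool {
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
			return false
		}
	}
	return true
}

func main() {
	dir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
	fmt.Println("binaries present, skip transfer:", haveBinaries(dir))
}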
	I1205 06:23:37.683790   48520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:23:37.691316   48520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:23:37.703871   48520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:23:37.716284   48520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 06:23:37.728952   48520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:23:37.732950   48520 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
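The grep above confirms control-plane.minikube.internal already resolves to the node IP in /etc/hosts. A sketch of the same lookup (pure stdlib; it only reports, it does not edit the file):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hostsEntryPresent scans an /etc/hosts-style file for a line mapping ip to
// hostname, mirroring the grep in the log.
func hostsEntryPresent(path, ip, hostname string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip {
			for _, h := range fields[1:] {
				if h == hostname {
					return true, nil // entry already present, as in the log
				}
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hostsEntryPresent("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	if err != nil {
		panic(err)
	}
	fmt.Println("hosts entry present:", ok)
}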
	I1205 06:23:37.733083   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:37.845498   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:37.867115   48520 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:23:37.867139   48520 certs.go:195] generating shared ca certs ...
	I1205 06:23:37.867158   48520 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:37.867407   48520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:23:37.867492   48520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:23:37.867536   48520 certs.go:257] generating profile certs ...
	I1205 06:23:37.867696   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:23:37.867788   48520 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:23:37.867863   48520 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:23:37.867878   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:23:37.867909   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:23:37.867937   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:23:37.867957   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:23:37.867990   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:23:37.868021   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:23:37.868041   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:23:37.868082   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:23:37.868158   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:23:37.868216   48520 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:23:37.868231   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:23:37.868276   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:23:37.868325   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:23:37.868373   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:23:37.868453   48520 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:23:37.868510   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:37.868541   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem -> /usr/share/ca-certificates/4192.pem
	I1205 06:23:37.868568   48520 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> /usr/share/ca-certificates/41922.pem
	I1205 06:23:37.869214   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:23:37.888705   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:23:37.907292   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:23:37.928487   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:23:37.946435   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:23:37.964299   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:23:37.982113   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:23:37.999555   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:23:38.025054   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:23:38.044579   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:23:38.064934   48520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:23:38.085119   48520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:23:38.098666   48520 ssh_runner.go:195] Run: openssl version
	I1205 06:23:38.104661   48520 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:23:38.105114   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.112530   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:23:38.119940   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123892   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.123985   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.124059   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:23:38.164658   48520 command_runner.go:130] > 51391683
	I1205 06:23:38.165135   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:23:38.172385   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.179652   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:23:38.187250   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190908   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190946   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.190996   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:23:38.231356   48520 command_runner.go:130] > 3ec20f2e
	I1205 06:23:38.231428   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:23:38.238676   48520 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.245835   48520 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:23:38.252946   48520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256642   48520 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256892   48520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.256951   48520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:23:38.296975   48520 command_runner.go:130] > b5213941
	I1205 06:23:38.297434   48520 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
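Each CA certificate above is hashed with `openssl x509 -hash` and symlinked as <hash>.0 so OpenSSL's trust-store lookup can find it. A sketch of that hash-and-link sequence, shelling out to openssl exactly as the log does (paths from the log; run with appropriate privileges):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert asks OpenSSL for the certificate's subject hash and links the
// cert as <hash>.0 in the trust dir, emulating the `ln -fs` in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}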
	I1205 06:23:38.304845   48520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308564   48520 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:23:38.308587   48520 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:23:38.308594   48520 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1205 06:23:38.308601   48520 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:23:38.308607   48520 command_runner.go:130] > Access: 2025-12-05 06:19:31.018816392 +0000
	I1205 06:23:38.308612   48520 command_runner.go:130] > Modify: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308618   48520 command_runner.go:130] > Change: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308623   48520 command_runner.go:130] >  Birth: 2025-12-05 06:15:26.988548566 +0000
	I1205 06:23:38.308692   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:23:38.348984   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.349475   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:23:38.394714   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.395243   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:23:38.435818   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.436261   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:23:38.476805   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.477267   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:23:38.518071   48520 command_runner.go:130] > Certificate will not expire
	I1205 06:23:38.518611   48520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:23:38.561014   48520 command_runner.go:130] > Certificate will not expire
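The `-checkend 86400` calls above ask whether each certificate stays valid for at least another 24 hours. The same test can be done natively with crypto/x509; a minimal sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// notExpiringWithin reimplements `openssl x509 -checkend` from the log with
// the standard library: it reports whether the cert is still valid at now+d.
func notExpiringWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// 86400s = 24h, matching the -checkend 86400 calls above.
	ok, err := notExpiringWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("certificate will not expire within 24h:", ok)
}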
	I1205 06:23:38.561491   48520 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:23:38.561574   48520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:23:38.561660   48520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:23:38.588277   48520 cri.go:89] found id: ""
	I1205 06:23:38.588366   48520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:23:38.596406   48520 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:23:38.596430   48520 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:23:38.596438   48520 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:23:38.597543   48520 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:23:38.597605   48520 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:23:38.597685   48520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:23:38.607655   48520 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:23:38.608093   48520 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.608241   48520 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "functional-101526" cluster setting kubeconfig missing "functional-101526" context setting]
	I1205 06:23:38.608622   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.609091   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.609324   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.609886   48520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:23:38.610063   48520 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:23:38.610057   48520 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:23:38.610120   48520 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:23:38.610139   48520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:23:38.610175   48520 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:23:38.610495   48520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:23:38.619299   48520 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:23:38.619367   48520 kubeadm.go:602] duration metric: took 21.74243ms to restartPrimaryControlPlane
	I1205 06:23:38.619392   48520 kubeadm.go:403] duration metric: took 57.910865ms to StartCluster
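The restart-vs-reconfigure decision above rests on `diff -u` of the old and new kubeadm configs: exit 0 means the running cluster already matches the desired config. A sketch of that exit-code interpretation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig runs `sudo diff -u old new` the way the log does: diff exits
// 0 when the configs match (a restart is enough) and 1 when they differ
// (the control plane must be reconfigured).
func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil
	}
	return false, err // exit code 2 means trouble, e.g. a missing file
}

func main() {
	reconfig, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs reconfiguration:", reconfig)
}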
	I1205 06:23:38.619420   48520 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.619502   48520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.620189   48520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:23:38.620458   48520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 06:23:38.620608   48520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:23:38.620940   48520 addons.go:70] Setting storage-provisioner=true in profile "functional-101526"
	I1205 06:23:38.621064   48520 addons.go:239] Setting addon storage-provisioner=true in "functional-101526"
	I1205 06:23:38.621113   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.620703   48520 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:23:38.621254   48520 addons.go:70] Setting default-storageclass=true in profile "functional-101526"
	I1205 06:23:38.621267   48520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-101526"
	I1205 06:23:38.621543   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.621837   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.622827   48520 out.go:179] * Verifying Kubernetes components...
	I1205 06:23:38.624023   48520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:23:38.667927   48520 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:23:38.668094   48520 kapi.go:59] client config for functional-101526: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:23:38.668372   48520 addons.go:239] Setting addon default-storageclass=true in "functional-101526"
	I1205 06:23:38.668400   48520 host.go:66] Checking if "functional-101526" exists ...
	I1205 06:23:38.668811   48520 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:23:38.682967   48520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:23:38.684152   48520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.684170   48520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:23:38.684236   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.712186   48520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:38.712208   48520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:23:38.712271   48520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:23:38.728758   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.759681   48520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:23:38.830869   48520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:23:38.880502   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:38.894150   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.597389   48520 node_ready.go:35] waiting up to 6m0s for node "functional-101526" to be "Ready" ...
	I1205 06:23:39.597462   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597505   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597540   48520 retry.go:31] will retry after 347.041569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597590   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.597614   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.597624   48520 retry.go:31] will retry after 291.359395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
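The apply failures above are transient (the apiserver is not listening yet), so each one is retried after a short randomized delay, as the varying retry intervals suggest. A generic sketch of that pattern (the jitter scheme here is illustrative; minikube's retry.go may differ):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter retries fn up to attempts times, sleeping a randomized
// delay between tries so concurrent appliers do not retry in lockstep.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused (attempt %d)", calls)
		}
		return nil // apiserver is up, apply succeeds
	})
	fmt.Println("final error:", err)
}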
	I1205 06:23:39.597657   48520 type.go:168] "Request Body" body=""
	I1205 06:23:39.597730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:39.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
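The node_ready wait issues a GET against /api/v1/nodes/functional-101526 roughly every 500ms until the node reports Ready. A sketch of that loop with client-go (kubeconfig path taken from the log; assumes the k8s.io/client-go modules are available):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses, mirroring the GET loop in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("node %s not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-2385/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-101526", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}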
	I1205 06:23:39.889264   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:39.945727   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:39.950448   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:39.950487   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:39.950523   48520 retry.go:31] will retry after 542.352885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018611   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.018720   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.018748   48520 retry.go:31] will retry after 498.666832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.098033   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.098325   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.493962   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:40.518418   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:40.562108   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.562226   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.562260   48520 retry.go:31] will retry after 406.138698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588025   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:40.588062   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.588081   48520 retry.go:31] will retry after 594.532888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:40.598248   48520 type.go:168] "Request Body" body=""
	I1205 06:23:40.598327   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:40.598636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:40.969306   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.034172   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.037396   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.037482   48520 retry.go:31] will retry after 875.411269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.098568   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.098689   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.098986   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:41.183391   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:41.246665   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.246713   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.246732   48520 retry.go:31] will retry after 928.241992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.598231   48520 type.go:168] "Request Body" body=""
	I1205 06:23:41.598321   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:41.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:41.598695   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
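Interleaved with the addon retries, a separate wait loop polls GET /api/v1/nodes/functional-101526 roughly every 500ms and emits the node_ready.go warning above whenever the dial is refused. A hedged client-go sketch of such a readiness poll, illustrating the pattern rather than reproducing minikube's node_ready.go:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the apiserver for a node's Ready condition on a
    // fixed interval, tolerating connection errors the way the log does.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // Matches the "connection refused" warnings above: keep retrying.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // node reported Ready
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // give up when the caller's deadline expires
            case <-ticker.C:
            }
        }
    }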
	I1205 06:23:41.913216   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:41.971936   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:41.975346   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:41.975382   48520 retry.go:31] will retry after 1.177811903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.098116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:42.175570   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:42.247042   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:42.247165   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.247197   48520 retry.go:31] will retry after 1.26909991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:42.598419   48520 type.go:168] "Request Body" body=""
	I1205 06:23:42.598544   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:42.598893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.097717   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.098051   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:43.154349   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:43.214165   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.217853   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.217885   48520 retry.go:31] will retry after 2.752289429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.517328   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:43.580346   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:43.580405   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.580434   48520 retry.go:31] will retry after 2.299289211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:43.598503   48520 type.go:168] "Request Body" body=""
	I1205 06:23:43.598628   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:43.598995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:43.599083   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:44.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.098502   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.098803   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:44.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:23:44.597856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:44.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.097813   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.097918   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.098342   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.597661   48520 type.go:168] "Request Body" body=""
	I1205 06:23:45.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:45.598116   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:45.880606   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:45.938914   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:45.938948   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.938966   48520 retry.go:31] will retry after 2.215203034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:45.971116   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:46.035840   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:46.035877   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.035895   48520 retry.go:31] will retry after 2.493998942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:46.098074   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.098239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.098559   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:46.098611   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:46.598405   48520 type.go:168] "Request Body" body=""
	I1205 06:23:46.598501   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:46.598815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.098358   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.098432   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.098766   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:47.598341   48520 type.go:168] "Request Body" body=""
	I1205 06:23:47.598407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:47.598667   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:48.098499   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.098582   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.098899   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:48.098950   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:48.155209   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:48.214464   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.214512   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.214531   48520 retry.go:31] will retry after 5.617095307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.530967   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:48.587770   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:48.587811   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:48.587831   48520 retry.go:31] will retry after 3.714896929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
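Note that the stderr hint to pass --validate=false would not, by itself, rescue these applies: downloading the OpenAPI schema is merely kubectl's first request to the apiserver, and with the dial to [::1]:8441 refused, the subsequent apply requests would fail the same way. The retries can only succeed once the apiserver is reachable again.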
	I1205 06:23:48.598174   48520 type.go:168] "Request Body" body=""
	I1205 06:23:48.598240   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:48.598490   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.098439   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.098511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:49.597635   48520 type.go:168] "Request Body" body=""
	I1205 06:23:49.597708   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:49.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.097641   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.097715   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.098020   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:50.597695   48520 type.go:168] "Request Body" body=""
	I1205 06:23:50.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:50.598128   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:50.598177   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:51.097653   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.097726   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:51.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:23:51.598434   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:51.598708   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.098476   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.098552   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.098854   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:52.303312   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:52.364380   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:52.367543   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.367573   48520 retry.go:31] will retry after 3.56011918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:52.597990   48520 type.go:168] "Request Body" body=""
	I1205 06:23:52.598059   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:52.598330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:52.598370   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:53.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.097720   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.097995   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:23:53.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:53.598131   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:53.832691   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:23:53.932471   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:53.935567   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:53.935601   48520 retry.go:31] will retry after 7.968340753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:54.098032   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.098119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.098504   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:54.598332   48520 type.go:168] "Request Body" body=""
	I1205 06:23:54.598408   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:54.598700   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:54.598750   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:55.098559   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.098636   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.098931   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:23:55.598452   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:55.598735   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:55.928461   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:23:55.985797   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:23:55.985849   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:23:55.985868   48520 retry.go:31] will retry after 13.95380646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
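By this point the retry delays have grown from roughly half a second at 06:23:39 (542ms, 498ms, 406ms) through ~1s, ~2.5s, and ~5.6s to 13.95s here, consistent with a doubling backoff plus jitter (the unjittered sequence would be 0.5s, 1s, 2s, 4s, 8s, 16s). Since the failure is a connection refusal rather than a transient validation problem, each longer wait changes nothing until the apiserver itself comes back.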
	I1205 06:23:56.098043   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.098142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:56.598257   48520 type.go:168] "Request Body" body=""
	I1205 06:23:56.598332   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:56.598591   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:57.098338   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.098418   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:57.098806   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:23:57.598368   48520 type.go:168] "Request Body" body=""
	I1205 06:23:57.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:57.598727   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.098565   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.098653   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.098993   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:58.597700   48520 type.go:168] "Request Body" body=""
	I1205 06:23:58.597771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:58.598060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.097640   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.097710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:23:59.597995   48520 type.go:168] "Request Body" body=""
	I1205 06:23:59.598071   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:23:59.598388   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:23:59.598441   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:00.097798   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.097895   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.098216   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:00.598109   48520 type.go:168] "Request Body" body=""
	I1205 06:24:00.598187   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:00.598469   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.098232   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.098307   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.098656   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:01.598378   48520 type.go:168] "Request Body" body=""
	I1205 06:24:01.598459   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:01.598756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:01.598798   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:01.904244   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:01.963282   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:01.966528   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:01.966559   48520 retry.go:31] will retry after 12.949527151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:02.097647   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.098069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:02.597723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:02.597819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:02.598178   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.097745   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.098222   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:03.597893   48520 type.go:168] "Request Body" body=""
	I1205 06:24:03.597959   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:03.598249   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:04.097760   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.098267   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:04.098317   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:04.598025   48520 type.go:168] "Request Body" body=""
	I1205 06:24:04.598124   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:04.598425   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.098484   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.098557   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.098824   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:05.598589   48520 type.go:168] "Request Body" body=""
	I1205 06:24:05.598684   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:05.599025   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.098166   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:06.597592   48520 type.go:168] "Request Body" body=""
	I1205 06:24:06.597662   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:06.597933   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:06.597973   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:07.098457   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.098530   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.098893   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:07.598367   48520 type.go:168] "Request Body" body=""
	I1205 06:24:07.598458   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:07.598757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.098344   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.098429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.098757   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:08.598492   48520 type.go:168] "Request Body" body=""
	I1205 06:24:08.598559   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:08.598841   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:08.598881   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:09.097901   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.097973   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.098345   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.598107   48520 type.go:168] "Request Body" body=""
	I1205 06:24:09.598174   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:09.598441   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:09.939938   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:09.995364   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:09.998554   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:09.998588   48520 retry.go:31] will retry after 16.114489594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
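Each failed addon apply is rescheduled by retry.go with a growing, randomized delay (16.1s here, 12.2s and 20.1s further down), so minikube keeps probing without hammering the unreachable apiserver. A rough sketch of that shape, assuming a doubling base plus jitter; applyWithRetry and its constants are illustrative stand-ins, not minikube's retry.go.

    // Minimal jittered-backoff retry in the spirit of the retry.go:31
    // lines above; the helper name and growth policy are assumptions.
    package addons

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // applyWithRetry runs apply() and, on failure, sleeps a randomized,
    // growing delay before the next attempt. base must be at least a few
    // milliseconds so the jitter term stays positive.
    func applyWithRetry(apply func() error, attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = apply(); err == nil {
    			return nil
    		}
    		d := base << uint(i)                          // double per attempt
    		d += time.Duration(rand.Int63n(int64(d / 2))) // up to +50% jitter
    		fmt.Printf("apply failed, will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }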
	I1205 06:24:10.097931   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.098044   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.098385   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:10.598110   48520 type.go:168] "Request Body" body=""
	I1205 06:24:10.598191   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:10.598513   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:11.098275   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.098343   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.098615   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:11.098670   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:11.598400   48520 type.go:168] "Request Body" body=""
	I1205 06:24:11.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:11.598740   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.098540   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.098616   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.098952   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:12.597962   48520 type.go:168] "Request Body" body=""
	I1205 06:24:12.598045   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:12.598331   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.097746   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.097819   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.098163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:13.597759   48520 type.go:168] "Request Body" body=""
	I1205 06:24:13.597834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:13.598122   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:13.598175   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:14.097627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.097709   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.097961   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.597953   48520 type.go:168] "Request Body" body=""
	I1205 06:24:14.598020   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:14.598324   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:14.916824   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:14.975576   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:14.975628   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:14.975646   48520 retry.go:31] will retry after 12.242306889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:15.097909   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.098005   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.098359   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:15.597934   48520 type.go:168] "Request Body" body=""
	I1205 06:24:15.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:15.598277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:15.598320   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:16.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.097787   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:16.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:24:16.597791   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:16.598100   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.097677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.098010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:17.597774   48520 type.go:168] "Request Body" body=""
	I1205 06:24:17.597845   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:17.598218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:18.097701   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:18.098183   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:18.598335   48520 type.go:168] "Request Body" body=""
	I1205 06:24:18.598405   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:18.598680   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.098583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.098655   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.098998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:19.597882   48520 type.go:168] "Request Body" body=""
	I1205 06:24:19.597965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:19.598257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:20.097767   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.097837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.098151   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:20.098210   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:20.597718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:20.597821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:20.598163   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.097868   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.097944   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:21.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:24:21.597748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:21.597999   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:22.097784   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.097863   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:22.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:22.597927   48520 type.go:168] "Request Body" body=""
	I1205 06:24:22.598018   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:22.598338   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:23.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:24:23.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:23.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.097757   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.097834   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.098165   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:24.598083   48520 type.go:168] "Request Body" body=""
	I1205 06:24:24.598155   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:24.598412   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:24.598451   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:25.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.097818   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.098201   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:25.597787   48520 type.go:168] "Request Body" body=""
	I1205 06:24:25.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:25.598206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.097628   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.097703   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.097958   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:26.114242   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:26.182245   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:26.182291   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.182309   48520 retry.go:31] will retry after 20.133806896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:26.597729   48520 type.go:168] "Request Body" body=""
	I1205 06:24:26.597815   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:27.097723   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.098115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:27.098168   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:27.218635   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:27.278311   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:27.278351   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:27.278369   48520 retry.go:31] will retry after 29.943294063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
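The apply itself is just kubectl shelled out inside the node via ssh_runner, and it fails before ever touching the manifest: kubectl downloads the apiserver's OpenAPI schema to validate the YAML, so with port 8441 refusing connections even validation cannot start (hence the suggested --validate=false escape hatch in the stderr). A hedged sketch of that invocation; runKubectlApply is an invented name, though the command and paths mirror the log lines above.

    // Illustrative wrapper around the command the ssh_runner lines execute;
    // only the helper name is an assumption.
    package addons

    import "os/exec"

    // runKubectlApply force-applies a manifest with the cluster's pinned
    // kubectl binary; sudo accepts the inline KUBECONFIG=... assignment.
    func runKubectlApply(manifest string) ([]byte, error) {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "-f", manifest)
    	return cmd.CombinedOutput()
    }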
	I1205 06:24:27.597675   48520 type.go:168] "Request Body" body=""
	I1205 06:24:27.597766   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:27.598047   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.097690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.097767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.098089   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:28.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:24:28.597760   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:28.598077   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.098008   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:29.597938   48520 type.go:168] "Request Body" body=""
	I1205 06:24:29.598028   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:29.598339   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:29.598384   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:30.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.097831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:30.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:24:30.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:30.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.097803   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.098330   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:31.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:24:31.597811   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:31.598159   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:32.097812   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.097897   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.098191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:32.098247   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:32.598587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:32.598658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:32.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.097615   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.097683   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.098041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:33.598348   48520 type.go:168] "Request Body" body=""
	I1205 06:24:33.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:33.598685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:34.098505   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.098598   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.098917   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:34.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:34.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:24:34.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:34.598097   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.098294   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.098374   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.098633   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:35.598401   48520 type.go:168] "Request Body" body=""
	I1205 06:24:35.598478   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:35.598810   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:36.098627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.098700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.099015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:36.099064   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:36.597658   48520 type.go:168] "Request Body" body=""
	I1205 06:24:36.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:36.598106   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.097718   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.097792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.098117   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:37.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:24:37.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:37.598093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.098206   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:38.597677   48520 type.go:168] "Request Body" body=""
	I1205 06:24:38.597747   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:38.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:38.598117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:39.097836   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.097928   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.098334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:39.598071   48520 type.go:168] "Request Body" body=""
	I1205 06:24:39.598143   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:39.598413   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.098336   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.098679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:40.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:40.598542   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:40.598808   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:40.598849   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:41.098353   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.098417   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.098669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:41.598525   48520 type.go:168] "Request Body" body=""
	I1205 06:24:41.598609   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:41.598927   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:42.597659   48520 type.go:168] "Request Body" body=""
	I1205 06:24:42.597734   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:42.598064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:43.097673   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.097755   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.098074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:43.098136   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:43.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:24:43.597761   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:43.598115   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.098299   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.098370   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.098629   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:44.598627   48520 type.go:168] "Request Body" body=""
	I1205 06:24:44.598699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:44.599010   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:45.097788   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.097907   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.098290   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:45.098408   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:45.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:24:45.597740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:45.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.098586   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.098659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.098977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:46.316378   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:24:46.382136   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:46.385605   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.385642   48520 retry.go:31] will retry after 25.45198813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:46.598118   48520 type.go:168] "Request Body" body=""
	I1205 06:24:46.598219   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:46.598522   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:47.098288   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.098354   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.098627   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:47.098682   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:47.598404   48520 type.go:168] "Request Body" body=""
	I1205 06:24:47.598472   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:47.598746   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.098648   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.099013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:48.598372   48520 type.go:168] "Request Body" body=""
	I1205 06:24:48.598439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:48.598709   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:49.098599   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.099061   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:49.099113   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:49.597937   48520 type.go:168] "Request Body" body=""
	I1205 06:24:49.598014   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:49.598306   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.097691   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:50.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:24:50.598564   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:50.598829   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.097583   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.097659   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.098037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:51.598325   48520 type.go:168] "Request Body" body=""
	I1205 06:24:51.598399   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:51.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:51.598761   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:52.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.098621   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.098978   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:52.597702   48520 type.go:168] "Request Body" body=""
	I1205 06:24:52.597773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:52.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.097590   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.097657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.097905   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:53.597594   48520 type.go:168] "Request Body" body=""
	I1205 06:24:53.597666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:53.597973   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:54.097697   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.097771   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.098071   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:54.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:54.597977   48520 type.go:168] "Request Body" body=""
	I1205 06:24:54.598054   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:54.598305   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.097736   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.097821   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.098111   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:55.598396   48520 type.go:168] "Request Body" body=""
	I1205 06:24:55.598475   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:55.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:56.098321   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.098407   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.098685   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:56.098727   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:56.598502   48520 type.go:168] "Request Body" body=""
	I1205 06:24:56.598588   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:56.598876   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.097587   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.097675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.097966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:57.222289   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:24:57.284849   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:24:57.284890   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:24:57.284910   48520 retry.go:31] will retry after 41.469992375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
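
	[editor's note] retry.go's "will retry after 41.469992375s" above is a randomized backoff: each failed kubectl apply is rescheduled after a jittered, growing delay rather than a fixed interval. A self-contained sketch of that pattern, assuming illustrative attempt counts and delays rather than minikube's real retry parameters:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs the given command until it succeeds, sleeping a
	// randomized, exponentially growing delay between attempts, similar in
	// spirit to minikube's retry.go behaviour around addon application.
	func applyWithRetry(attempts int, base time.Duration, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command(name, args...).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("%v\noutput: %s", e, out)
			// Jitter: sleep between 1x and 2x the current base, then double it.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
			base *= 2
		}
		return err
	}

	func main() {
		// Mirrors the failing apply in the log above (run where kubectl exists).
		err := applyWithRetry(5, 10*time.Second,
			"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}
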
	I1205 06:24:57.598343   48520 type.go:168] "Request Body" body=""
	I1205 06:24:57.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:57.598669   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:58.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.098574   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.098880   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:24:58.098930   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:24:58.597606   48520 type.go:168] "Request Body" body=""
	I1205 06:24:58.597675   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:58.598032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.098539   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.098608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.098916   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:24:59.597662   48520 type.go:168] "Request Body" body=""
	I1205 06:24:59.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:24:59.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.097620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.097697   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.098017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:00.598395   48520 type.go:168] "Request Body" body=""
	I1205 06:25:00.598474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:00.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:00.598791   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:01.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.098690   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.099039   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:01.597693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:01.597775   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:01.598053   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.097784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.098050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:02.597715   48520 type.go:168] "Request Body" body=""
	I1205 06:25:02.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:02.598124   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:03.097717   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.097804   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.098169   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:03.098231   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:03.597623   48520 type.go:168] "Request Body" body=""
	I1205 06:25:03.597691   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:03.598033   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.097739   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.098119   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:04.597929   48520 type.go:168] "Request Body" body=""
	I1205 06:25:04.598003   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:04.598334   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:05.098361   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.098426   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.098689   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:05.098730   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:05.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:05.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:05.598783   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.098625   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.098705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.099060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:06.598355   48520 type.go:168] "Request Body" body=""
	I1205 06:25:06.598425   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:06.598694   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:07.098518   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:07.098978   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:07.597640   48520 type.go:168] "Request Body" body=""
	I1205 06:25:07.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:07.598023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.097648   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.097718   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.098028   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:08.597690   48520 type.go:168] "Request Body" body=""
	I1205 06:25:08.597762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:08.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.097853   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.098296   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:09.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:25:09.598150   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:09.598411   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:09.598454   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:10.097719   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:10.597704   48520 type.go:168] "Request Body" body=""
	I1205 06:25:10.597782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:10.598121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.097705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.097959   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.597642   48520 type.go:168] "Request Body" body=""
	I1205 06:25:11.597738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:11.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:11.838548   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:25:11.913959   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914006   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:11.914113   48520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
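
	[editor's note] Every failure in this stretch, the node polls and both addon applies, reduces to the same root cause: nothing is accepting TCP connections on the apiserver port, so 192.168.49.2:8441 and localhost:8441 both refuse the dial. The suggested --validate=false would not help here, since the apply itself needs the apiserver, not just the /openapi/v2 validation endpoint. A quick probe of that precondition, as a sketch:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// reachable reports whether something accepts TCP connections at addr,
	// which is the precondition every request in this log is failing on.
	func reachable(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: connection refused"
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
			fmt.Printf("%s reachable: %v\n", addr, reachable(addr, 2*time.Second))
		}
	}
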
	I1205 06:25:12.098377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.098446   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.098756   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:12.098805   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:12.598327   48520 type.go:168] "Request Body" body=""
	I1205 06:25:12.598398   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:12.598661   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.098442   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.098525   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.098840   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:13.598638   48520 type.go:168] "Request Body" body=""
	I1205 06:25:13.598705   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:13.599017   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.097669   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.097748   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.098009   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:14.597973   48520 type.go:168] "Request Body" body=""
	I1205 06:25:14.598055   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:14.598377   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:14.598425   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:15.098092   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.098173   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.098548   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:15.598315   48520 type.go:168] "Request Body" body=""
	I1205 06:25:15.598383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:15.598676   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.098414   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.098500   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.098815   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:16.598530   48520 type.go:168] "Request Body" body=""
	I1205 06:25:16.598606   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:16.598956   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:16.599009   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:17.097660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.097731   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.098060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:17.597681   48520 type.go:168] "Request Body" body=""
	I1205 06:25:17.597754   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:17.598099   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.097848   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.097925   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.098264   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:18.597980   48520 type.go:168] "Request Body" body=""
	I1205 06:25:18.598079   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:18.598336   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:19.098419   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.098509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.098856   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:19.098915   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:19.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:25:19.597704   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:19.598037   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.097716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.097977   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:20.597733   48520 type.go:168] "Request Body" body=""
	I1205 06:25:20.597809   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:20.598177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.097758   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.098096   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:21.598330   48520 type.go:168] "Request Body" body=""
	I1205 06:25:21.598424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:21.598679   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:21.598737   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:22.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.098935   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:22.597673   48520 type.go:168] "Request Body" body=""
	I1205 06:25:22.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:22.598082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.098371   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.098439   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.098687   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:23.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:25:23.598509   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:23.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:23.598843   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:24.098620   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.098699   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.099069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:24.597875   48520 type.go:168] "Request Body" body=""
	I1205 06:25:24.597963   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:24.598230   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.097708   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.097788   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.098109   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:25.597831   48520 type.go:168] "Request Body" body=""
	I1205 06:25:25.597926   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:25.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:26.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.097713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.097972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:26.098033   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:26.597712   48520 type.go:168] "Request Body" body=""
	I1205 06:25:26.597784   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:26.598123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.097823   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.097896   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:27.597637   48520 type.go:168] "Request Body" body=""
	I1205 06:25:27.597713   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:27.597972   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:28.097675   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.097744   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.098036   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:28.098084   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:28.597722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:28.597831   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:28.598154   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.097680   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.097749   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.098021   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:29.597915   48520 type.go:168] "Request Body" body=""
	I1205 06:25:29.597987   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:29.598315   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:30.098008   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.098085   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.098479   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:30.098542   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:30.598023   48520 type.go:168] "Request Body" body=""
	I1205 06:25:30.598099   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:30.598365   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.097667   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.097739   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.098082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:31.597660   48520 type.go:168] "Request Body" body=""
	I1205 06:25:31.597732   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:31.598050   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.097646   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.097729   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.097985   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:32.597714   48520 type.go:168] "Request Body" body=""
	I1205 06:25:32.597792   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:32.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:32.598157   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:33.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.097799   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.098123   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:33.597803   48520 type.go:168] "Request Body" body=""
	I1205 06:25:33.597872   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:33.598133   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.097693   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.097765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.098121   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:34.598211   48520 type.go:168] "Request Body" body=""
	I1205 06:25:34.598290   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:34.598585   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:34.598631   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:35.098390   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.098471   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.098750   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:35.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:35.598657   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:35.598992   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.097793   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.098142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:36.598358   48520 type.go:168] "Request Body" body=""
	I1205 06:25:36.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:36.598693   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:36.598731   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:37.098496   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.098568   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.098894   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:37.597976   48520 type.go:168] "Request Body" body=""
	I1205 06:25:37.598057   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:37.599817   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1205 06:25:38.097679   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.097790   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.098452   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:38.598262   48520 type.go:168] "Request Body" body=""
	I1205 06:25:38.598388   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:38.598749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:38.598830   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:38.755357   48520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:25:38.811504   48520 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811556   48520 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:25:38.811634   48520 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:25:38.813895   48520 out.go:179] * Enabled addons: 
	I1205 06:25:38.815272   48520 addons.go:530] duration metric: took 2m0.19467206s for enable addons: enabled=[]
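
	[editor's note] The "duration metric" line closes the two-minute addon phase with an empty enabled list: the phase start is stamped once and the elapsed time is logged regardless of outcome. The underlying pattern is just time.Since; a minimal sketch with hypothetical names:

	package main

	import (
		"fmt"
		"time"
	)

	// trackDuration returns a function that logs how long the named phase
	// took, in the style of minikube's "duration metric" log lines.
	func trackDuration(name string) func(enabled []string) {
		start := time.Now()
		return func(enabled []string) {
			fmt.Printf("duration metric: took %s for %s: enabled=%v\n",
				time.Since(start), name, enabled)
		}
	}

	func main() {
		done := trackDuration("enable addons")
		time.Sleep(50 * time.Millisecond) // stand-in for the real work
		done(nil)                         // prints enabled=[] when every addon failed
	}
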
	I1205 06:25:39.097850   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.097947   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.098277   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:39.598056   48520 type.go:168] "Request Body" body=""
	I1205 06:25:39.598125   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:39.598435   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.098242   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.098311   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.098643   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:40.598377   48520 type.go:168] "Request Body" body=""
	I1205 06:25:40.598451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:40.598717   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:41.098378   48520 type.go:168] "Request Body" body=""
	I1205 06:25:41.098451   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:41.098767   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:41.098817   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:41.598539   48520 type.go:168] "Request Body" body=""
	I1205 06:25:41.598608   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:41.598921   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:42.097686   48520 type.go:168] "Request Body" body=""
	I1205 06:25:42.097773   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:42.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:42.597634   48520 type.go:168] "Request Body" body=""
	I1205 06:25:42.597727   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:42.598041   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:43.097789   48520 type.go:168] "Request Body" body=""
	I1205 06:25:43.097885   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:43.098205   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:43.597913   48520 type.go:168] "Request Body" body=""
	I1205 06:25:43.597988   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:43.598331   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:43.598385   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:44.097656   48520 type.go:168] "Request Body" body=""
	I1205 06:25:44.097735   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:44.098040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:44.598010   48520 type.go:168] "Request Body" body=""
	I1205 06:25:44.598096   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:44.598435   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:45.098016   48520 type.go:168] "Request Body" body=""
	I1205 06:25:45.098099   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:45.098496   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:45.597755   48520 type.go:168] "Request Body" body=""
	I1205 06:25:45.597830   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:45.598148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:46.097840   48520 type.go:168] "Request Body" body=""
	I1205 06:25:46.097939   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:46.098311   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:46.098366   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:46.598036   48520 type.go:168] "Request Body" body=""
	I1205 06:25:46.598111   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:46.598421   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:47.098155   48520 type.go:168] "Request Body" body=""
	I1205 06:25:47.098226   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:47.098489   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:47.598368   48520 type.go:168] "Request Body" body=""
	I1205 06:25:47.598435   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:47.598715   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:48.098525   48520 type.go:168] "Request Body" body=""
	I1205 06:25:48.098599   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:48.098931   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:48.099014   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:48.598319   48520 type.go:168] "Request Body" body=""
	I1205 06:25:48.598387   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:48.598646   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:49.098618   48520 type.go:168] "Request Body" body=""
	I1205 06:25:49.098694   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:49.099074   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:49.597928   48520 type.go:168] "Request Body" body=""
	I1205 06:25:49.598000   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:49.598344   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:50.098007   48520 type.go:168] "Request Body" body=""
	I1205 06:25:50.098092   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:50.098397   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:50.598124   48520 type.go:168] "Request Body" body=""
	I1205 06:25:50.598202   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:50.598496   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:50.598545   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:51.098285   48520 type.go:168] "Request Body" body=""
	I1205 06:25:51.098357   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:51.098690   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:51.598330   48520 type.go:168] "Request Body" body=""
	I1205 06:25:51.598395   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:51.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:52.098404   48520 type.go:168] "Request Body" body=""
	I1205 06:25:52.098477   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:52.098809   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:52.598593   48520 type.go:168] "Request Body" body=""
	I1205 06:25:52.598670   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:52.598948   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:52.598996   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:53.097636   48520 type.go:168] "Request Body" body=""
	I1205 06:25:53.097712   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:53.098014   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:53.597701   48520 type.go:168] "Request Body" body=""
	I1205 06:25:53.597798   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:53.598156   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:54.097890   48520 type.go:168] "Request Body" body=""
	I1205 06:25:54.097965   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:54.098294   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:54.598153   48520 type.go:168] "Request Body" body=""
	I1205 06:25:54.598231   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:54.598502   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:55.098335   48520 type.go:168] "Request Body" body=""
	I1205 06:25:55.098413   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:55.098774   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:55.098829   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:55.598579   48520 type.go:168] "Request Body" body=""
	I1205 06:25:55.598649   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:55.598924   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:56.098305   48520 type.go:168] "Request Body" body=""
	I1205 06:25:56.098373   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:56.098641   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:56.598457   48520 type.go:168] "Request Body" body=""
	I1205 06:25:56.598530   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:56.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:57.098559   48520 type.go:168] "Request Body" body=""
	I1205 06:25:57.098633   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:57.098928   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:57.098974   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:25:57.598334   48520 type.go:168] "Request Body" body=""
	I1205 06:25:57.598416   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:57.598776   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:58.098490   48520 type.go:168] "Request Body" body=""
	I1205 06:25:58.098572   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:58.098937   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:58.598427   48520 type.go:168] "Request Body" body=""
	I1205 06:25:58.598511   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:58.598848   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:59.098372   48520 type.go:168] "Request Body" body=""
	I1205 06:25:59.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:59.098724   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:25:59.597568   48520 type.go:168] "Request Body" body=""
	I1205 06:25:59.597645   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:25:59.597976   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:25:59.598030   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:00.098475   48520 type.go:168] "Request Body" body=""
	I1205 06:26:00.098597   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:00.098940   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:00.598316   48520 type.go:168] "Request Body" body=""
	I1205 06:26:00.598385   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:00.598643   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:01.098402   48520 type.go:168] "Request Body" body=""
	I1205 06:26:01.098479   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:01.098749   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:01.598516   48520 type.go:168] "Request Body" body=""
	I1205 06:26:01.598588   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:01.598903   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:01.598947   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:02.098325   48520 type.go:168] "Request Body" body=""
	I1205 06:26:02.098398   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:02.098727   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:02.598537   48520 type.go:168] "Request Body" body=""
	I1205 06:26:02.598620   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:02.598964   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:03.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:26:03.097765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:03.098104   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:03.598364   48520 type.go:168] "Request Body" body=""
	I1205 06:26:03.598437   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:03.598722   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:04.098558   48520 type.go:168] "Request Body" body=""
	I1205 06:26:04.098639   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:04.099052   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:04.099124   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:04.597875   48520 type.go:168] "Request Body" body=""
	I1205 06:26:04.597957   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:04.598338   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:05.097925   48520 type.go:168] "Request Body" body=""
	I1205 06:26:05.097993   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:05.098254   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:05.597982   48520 type.go:168] "Request Body" body=""
	I1205 06:26:05.598052   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:05.598387   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:06.098263   48520 type.go:168] "Request Body" body=""
	I1205 06:26:06.098343   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:06.098724   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:06.598388   48520 type.go:168] "Request Body" body=""
	I1205 06:26:06.598455   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:06.598714   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:06.598754   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:07.098499   48520 type.go:168] "Request Body" body=""
	I1205 06:26:07.098585   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:07.098898   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:07.597622   48520 type.go:168] "Request Body" body=""
	I1205 06:26:07.597694   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:07.598070   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:08.097638   48520 type.go:168] "Request Body" body=""
	I1205 06:26:08.097707   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:08.097980   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:08.597684   48520 type.go:168] "Request Body" body=""
	I1205 06:26:08.597757   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:08.598102   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:09.097870   48520 type.go:168] "Request Body" body=""
	I1205 06:26:09.097951   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:09.098248   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:09.098294   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:09.598112   48520 type.go:168] "Request Body" body=""
	I1205 06:26:09.598239   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:09.598574   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:10.098411   48520 type.go:168] "Request Body" body=""
	I1205 06:26:10.098493   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:10.098840   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:10.597577   48520 type.go:168] "Request Body" body=""
	I1205 06:26:10.597650   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:10.597981   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:11.098430   48520 type.go:168] "Request Body" body=""
	I1205 06:26:11.098504   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:11.098767   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:11.098808   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:11.598510   48520 type.go:168] "Request Body" body=""
	I1205 06:26:11.598593   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:11.598863   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:12.097604   48520 type.go:168] "Request Body" body=""
	I1205 06:26:12.097676   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:12.097998   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:12.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:26:12.597698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:12.598016   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:13.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:13.097781   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:13.098093   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:13.597735   48520 type.go:168] "Request Body" body=""
	I1205 06:26:13.597806   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:13.598139   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:13.598192   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:14.098341   48520 type.go:168] "Request Body" body=""
	I1205 06:26:14.098414   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:14.098675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:14.597561   48520 type.go:168] "Request Body" body=""
	I1205 06:26:14.597634   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:14.597953   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:15.097674   48520 type.go:168] "Request Body" body=""
	I1205 06:26:15.097789   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:15.098148   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:15.597858   48520 type.go:168] "Request Body" body=""
	I1205 06:26:15.597937   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:15.598207   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:15.598252   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:16.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:26:16.097779   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:16.098136   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:16.597837   48520 type.go:168] "Request Body" body=""
	I1205 06:26:16.597922   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:16.598258   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:17.097730   48520 type.go:168] "Request Body" body=""
	I1205 06:26:17.097795   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:17.098081   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:17.597795   48520 type.go:168] "Request Body" body=""
	I1205 06:26:17.597868   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:17.598177   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:18.097743   48520 type.go:168] "Request Body" body=""
	I1205 06:26:18.097833   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:18.098180   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:18.098240   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:18.597644   48520 type.go:168] "Request Body" body=""
	I1205 06:26:18.597716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:18.598040   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:19.097951   48520 type.go:168] "Request Body" body=""
	I1205 06:26:19.098029   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:19.098364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:19.598124   48520 type.go:168] "Request Body" body=""
	I1205 06:26:19.598197   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:19.598489   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:20.098213   48520 type.go:168] "Request Body" body=""
	I1205 06:26:20.098283   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:20.098535   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:20.098577   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:20.598411   48520 type.go:168] "Request Body" body=""
	I1205 06:26:20.598481   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:20.598797   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:21.098573   48520 type.go:168] "Request Body" body=""
	I1205 06:26:21.098642   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:21.098953   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:21.598371   48520 type.go:168] "Request Body" body=""
	I1205 06:26:21.598445   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:21.598703   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:22.098546   48520 type.go:168] "Request Body" body=""
	I1205 06:26:22.098626   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:22.098949   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:22.099002   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:22.597666   48520 type.go:168] "Request Body" body=""
	I1205 06:26:22.597742   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:22.598072   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:23.098292   48520 type.go:168] "Request Body" body=""
	I1205 06:26:23.098363   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:23.098623   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:23.598300   48520 type.go:168] "Request Body" body=""
	I1205 06:26:23.598378   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:23.598681   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:24.098489   48520 type.go:168] "Request Body" body=""
	I1205 06:26:24.098570   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:24.098890   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:24.597874   48520 type.go:168] "Request Body" body=""
	I1205 06:26:24.597942   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:24.598193   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:24.598235   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:25.097954   48520 type.go:168] "Request Body" body=""
	I1205 06:26:25.098049   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:25.098380   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:25.598080   48520 type.go:168] "Request Body" body=""
	I1205 06:26:25.598171   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:25.598468   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:26.098255   48520 type.go:168] "Request Body" body=""
	I1205 06:26:26.098335   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:26.098599   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:26.598438   48520 type.go:168] "Request Body" body=""
	I1205 06:26:26.598513   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:26.598780   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:26.598819   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:27.098598   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.098666   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.098997   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:27.598342   48520 type.go:168] "Request Body" body=""
	I1205 06:26:27.598419   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:27.598674   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.098464   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.098548   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.098911   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:28.597628   48520 type.go:168] "Request Body" body=""
	I1205 06:26:28.597711   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:28.598054   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:29.097810   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:29.098185   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:29.598097   48520 type.go:168] "Request Body" body=""
	I1205 06:26:29.598171   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:29.598512   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.098305   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.098383   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.098722   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:30.598362   48520 type.go:168] "Request Body" body=""
	I1205 06:26:30.598429   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:30.598778   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:31.098515   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.098594   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.098941   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:31.099003   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:31.597678   48520 type.go:168] "Request Body" body=""
	I1205 06:26:31.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:31.598069   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.097668   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.097740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.098007   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:32.597672   48520 type.go:168] "Request Body" body=""
	I1205 06:26:32.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:32.598073   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.097726   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.098170   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:33.598322   48520 type.go:168] "Request Body" body=""
	I1205 06:26:33.598390   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:33.598644   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:33.598681   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:34.098437   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.098514   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.098910   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:34.597764   48520 type.go:168] "Request Body" body=""
	I1205 06:26:34.597837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:34.598152   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.097815   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.097898   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.098218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:35.597656   48520 type.go:168] "Request Body" body=""
	I1205 06:26:35.597752   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:35.598058   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:36.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.097878   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.098272   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:36.098329   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:36.597583   48520 type.go:168] "Request Body" body=""
	I1205 06:26:36.597647   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:36.597901   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.097624   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.097698   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:37.597626   48520 type.go:168] "Request Body" body=""
	I1205 06:26:37.597700   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:37.598016   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.097738   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.098029   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:26:38.597734   48520 type.go:168] "Request Body" body=""
	I1205 06:26:38.597805   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:38.598144   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:26:38.598199   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:26:39.097874   48520 type.go:168] "Request Body" body=""
	I1205 06:26:39.097953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:26:39.098299   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	... [identical "Request Body" / "Request" / "Response" triples repeat every ~500ms from 06:26:39.5 through 06:27:40.5; each GET https://192.168.49.2:8441/api/v1/nodes/functional-101526 fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 re-logs the "will retry" warning roughly every 2s] ...
	I1205 06:27:41.097664   48520 type.go:168] "Request Body" body=""
	I1205 06:27:41.097730   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:41.097993   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:41.597682   48520 type.go:168] "Request Body" body=""
	I1205 06:27:41.597753   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:41.598068   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:41.598120   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:42.097726   48520 type.go:168] "Request Body" body=""
	I1205 06:27:42.097810   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:42.098183   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:42.597650   48520 type.go:168] "Request Body" body=""
	I1205 06:27:42.597716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:42.597968   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:43.097666   48520 type.go:168] "Request Body" body=""
	I1205 06:27:43.097744   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:43.098084   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:43.598402   48520 type.go:168] "Request Body" body=""
	I1205 06:27:43.598480   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:43.598777   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:43.598825   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:44.098381   48520 type.go:168] "Request Body" body=""
	I1205 06:27:44.098457   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:44.098721   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:44.597605   48520 type.go:168] "Request Body" body=""
	I1205 06:27:44.597678   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:44.598013   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:45.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:27:45.097828   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:45.098242   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:45.597563   48520 type.go:168] "Request Body" body=""
	I1205 06:27:45.597635   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:45.597935   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:46.097599   48520 type.go:168] "Request Body" body=""
	I1205 06:27:46.097672   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:46.097994   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:46.098054   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:46.597698   48520 type.go:168] "Request Body" body=""
	I1205 06:27:46.597776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:46.598134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:47.098397   48520 type.go:168] "Request Body" body=""
	I1205 06:27:47.098474   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:47.098743   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:47.598385   48520 type.go:168] "Request Body" body=""
	I1205 06:27:47.598461   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:47.598785   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:48.098501   48520 type.go:168] "Request Body" body=""
	I1205 06:27:48.098580   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:48.098912   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:48.098971   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:48.598554   48520 type.go:168] "Request Body" body=""
	I1205 06:27:48.598624   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:48.598891   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:49.097780   48520 type.go:168] "Request Body" body=""
	I1205 06:27:49.097870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:49.098243   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:49.598130   48520 type.go:168] "Request Body" body=""
	I1205 06:27:49.598205   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:49.598520   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:50.098061   48520 type.go:168] "Request Body" body=""
	I1205 06:27:50.098132   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:50.098478   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:50.598272   48520 type.go:168] "Request Body" body=""
	I1205 06:27:50.598348   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:50.598647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:50.598692   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:51.098399   48520 type.go:168] "Request Body" body=""
	I1205 06:27:51.098484   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:51.098838   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:51.598356   48520 type.go:168] "Request Body" body=""
	I1205 06:27:51.598424   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:51.598698   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:52.098686   48520 type.go:168] "Request Body" body=""
	I1205 06:27:52.098782   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:52.099141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:52.597853   48520 type.go:168] "Request Body" body=""
	I1205 06:27:52.597931   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:52.598221   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:53.097644   48520 type.go:168] "Request Body" body=""
	I1205 06:27:53.097714   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:53.098015   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:53.098066   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:53.597727   48520 type.go:168] "Request Body" body=""
	I1205 06:27:53.597807   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:53.598187   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:54.097889   48520 type.go:168] "Request Body" body=""
	I1205 06:27:54.097964   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:54.098296   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:54.598052   48520 type.go:168] "Request Body" body=""
	I1205 06:27:54.598125   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:54.598384   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:55.098055   48520 type.go:168] "Request Body" body=""
	I1205 06:27:55.098128   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:55.098471   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:55.098525   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:55.598047   48520 type.go:168] "Request Body" body=""
	I1205 06:27:55.598117   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:55.598443   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:56.098223   48520 type.go:168] "Request Body" body=""
	I1205 06:27:56.098308   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:56.098582   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:56.598347   48520 type.go:168] "Request Body" body=""
	I1205 06:27:56.598418   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:56.598724   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:57.098522   48520 type.go:168] "Request Body" body=""
	I1205 06:27:57.098600   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:57.098946   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:57.099013   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:27:57.597645   48520 type.go:168] "Request Body" body=""
	I1205 06:27:57.597724   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:57.598038   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:58.097753   48520 type.go:168] "Request Body" body=""
	I1205 06:27:58.097844   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:58.098233   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:58.597707   48520 type.go:168] "Request Body" body=""
	I1205 06:27:58.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:58.598134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:59.097720   48520 type.go:168] "Request Body" body=""
	I1205 06:27:59.097785   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:59.098035   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:27:59.598043   48520 type.go:168] "Request Body" body=""
	I1205 06:27:59.598109   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:27:59.598433   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:27:59.598492   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:00.098337   48520 type.go:168] "Request Body" body=""
	I1205 06:28:00.098427   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:00.098788   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:00.598433   48520 type.go:168] "Request Body" body=""
	I1205 06:28:00.598513   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:00.598794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:01.098654   48520 type.go:168] "Request Body" body=""
	I1205 06:28:01.098740   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:01.099090   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:01.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:28:01.597780   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:01.598103   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:02.097660   48520 type.go:168] "Request Body" body=""
	I1205 06:28:02.097737   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:02.098064   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:02.098117   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:02.597688   48520 type.go:168] "Request Body" body=""
	I1205 06:28:02.597767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:02.598060   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:03.097737   48520 type.go:168] "Request Body" body=""
	I1205 06:28:03.097813   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:03.098134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:03.597667   48520 type.go:168] "Request Body" body=""
	I1205 06:28:03.597737   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:03.597990   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:04.097681   48520 type.go:168] "Request Body" body=""
	I1205 06:28:04.097756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:04.098055   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:04.597984   48520 type.go:168] "Request Body" body=""
	I1205 06:28:04.598055   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:04.598390   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:04.598444   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:05.097832   48520 type.go:168] "Request Body" body=""
	I1205 06:28:05.097903   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:05.098218   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:05.597683   48520 type.go:168] "Request Body" body=""
	I1205 06:28:05.597767   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:05.598108   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:06.097810   48520 type.go:168] "Request Body" body=""
	I1205 06:28:06.097886   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:06.098254   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:06.597680   48520 type.go:168] "Request Body" body=""
	I1205 06:28:06.597765   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:06.598082   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:07.097703   48520 type.go:168] "Request Body" body=""
	I1205 06:28:07.097776   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:07.098088   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:07.098142   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:07.597834   48520 type.go:168] "Request Body" body=""
	I1205 06:28:07.597911   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:07.598223   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:08.097644   48520 type.go:168] "Request Body" body=""
	I1205 06:28:08.097716   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:08.098023   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:08.597760   48520 type.go:168] "Request Body" body=""
	I1205 06:28:08.597837   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:08.598171   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:09.097935   48520 type.go:168] "Request Body" body=""
	I1205 06:28:09.098013   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:09.098328   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:09.098389   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:09.598044   48520 type.go:168] "Request Body" body=""
	I1205 06:28:09.598116   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:09.598364   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:10.097684   48520 type.go:168] "Request Body" body=""
	I1205 06:28:10.097762   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:10.098113   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:10.597798   48520 type.go:168] "Request Body" body=""
	I1205 06:28:10.597870   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:10.598188   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:11.097802   48520 type.go:168] "Request Body" body=""
	I1205 06:28:11.097873   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:11.098134   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:11.597795   48520 type.go:168] "Request Body" body=""
	I1205 06:28:11.597864   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:11.598173   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:11.598226   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:12.097903   48520 type.go:168] "Request Body" body=""
	I1205 06:28:12.097983   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:12.098374   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:12.597639   48520 type.go:168] "Request Body" body=""
	I1205 06:28:12.597710   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:12.597970   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:13.097632   48520 type.go:168] "Request Body" body=""
	I1205 06:28:13.097707   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:13.098032   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:13.597706   48520 type.go:168] "Request Body" body=""
	I1205 06:28:13.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:13.598105   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:14.097811   48520 type.go:168] "Request Body" body=""
	I1205 06:28:14.097887   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:14.098156   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:14.098195   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:14.598036   48520 type.go:168] "Request Body" body=""
	I1205 06:28:14.598112   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:14.598466   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:15.097733   48520 type.go:168] "Request Body" body=""
	I1205 06:28:15.097808   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:15.098140   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:15.597813   48520 type.go:168] "Request Body" body=""
	I1205 06:28:15.597884   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:15.598142   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:16.097724   48520 type.go:168] "Request Body" body=""
	I1205 06:28:16.097796   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:16.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:16.597699   48520 type.go:168] "Request Body" body=""
	I1205 06:28:16.597769   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:16.598101   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:16.598156   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:17.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.097876   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.098194   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:17.597663   48520 type.go:168] "Request Body" body=""
	I1205 06:28:17.597756   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:17.598070   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.097790   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.097874   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.098215   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:18.597886   48520 type.go:168] "Request Body" body=""
	I1205 06:28:18.597953   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:18.598207   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:18.598246   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:19.098191   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.098264   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.098596   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:19.598092   48520 type.go:168] "Request Body" body=""
	I1205 06:28:19.598164   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:19.598453   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.098276   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.098366   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.098647   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:20.598475   48520 type.go:168] "Request Body" body=""
	I1205 06:28:20.598565   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:20.598966   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:20.599023   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:21.097706   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.097783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.098141   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:21.597816   48520 type.go:168] "Request Body" body=""
	I1205 06:28:21.597890   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:21.598175   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.097699   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.097778   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.098114   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:22.597810   48520 type.go:168] "Request Body" body=""
	I1205 06:28:22.597883   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:22.598225   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:23.097913   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.097992   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.098257   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:23.098301   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:23.597679   48520 type.go:168] "Request Body" body=""
	I1205 06:28:23.597751   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:23.598083   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.097782   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.097861   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.098209   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:24.598076   48520 type.go:168] "Request Body" body=""
	I1205 06:28:24.598142   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:24.598394   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:25.098055   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.098130   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.098501   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:25.098557   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:25.598048   48520 type.go:168] "Request Body" body=""
	I1205 06:28:25.598119   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:25.598461   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.098278   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.098345   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.098636   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:26.598407   48520 type.go:168] "Request Body" body=""
	I1205 06:28:26.598479   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:26.598794   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:27.098588   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.098668   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.099022   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:27.099091   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:27.598346   48520 type.go:168] "Request Body" body=""
	I1205 06:28:27.598412   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:27.598675   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.098428   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.098506   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.098818   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:28.598580   48520 type.go:168] "Request Body" body=""
	I1205 06:28:28.598652   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:28.598974   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.097654   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.097745   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.098125   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:28:29.598001   48520 type.go:168] "Request Body" body=""
	I1205 06:28:29.598100   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:29.598428   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:28:29.598481   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:28:30.098006   48520 type.go:168] "Request Body" body=""
	I1205 06:28:30.098087   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:28:30.098427   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET /api/v1/nodes/functional-101526 round trip repeats on a ~500 ms cadence from 06:28:30 through 06:29:32; every attempt fails immediately (status="", headers="", milliseconds=0) because nothing is listening on 192.168.49.2:8441, and node_ready.go:55 re-emits the same "connection refused" (will retry) warning every four to five polls (~2-2.5 s) throughout the interval ...]
	I1205 06:29:32.597670   48520 type.go:168] "Request Body" body=""
	I1205 06:29:32.597783   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:32.598096   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:33.097781   48520 type.go:168] "Request Body" body=""
	I1205 06:29:33.097856   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:33.098197   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:33.098249   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:33.598550   48520 type.go:168] "Request Body" body=""
	I1205 06:29:33.598618   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:33.598869   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:34.097577   48520 type.go:168] "Request Body" body=""
	I1205 06:29:34.097658   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:34.097965   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:34.597894   48520 type.go:168] "Request Body" body=""
	I1205 06:29:34.597969   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:34.598297   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:35.097968   48520 type.go:168] "Request Body" body=""
	I1205 06:29:35.098046   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:35.098335   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:35.098382   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:35.597705   48520 type.go:168] "Request Body" body=""
	I1205 06:29:35.597795   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:35.598375   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:36.097722   48520 type.go:168] "Request Body" body=""
	I1205 06:29:36.097802   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:36.098192   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:36.597753   48520 type.go:168] "Request Body" body=""
	I1205 06:29:36.597820   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:36.598067   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:37.097731   48520 type.go:168] "Request Body" body=""
	I1205 06:29:37.097808   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:37.098094   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:37.597775   48520 type.go:168] "Request Body" body=""
	I1205 06:29:37.597846   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:37.598191   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:29:37.598248   48520 node_ready.go:55] error getting node "functional-101526" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-101526": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:29:38.097901   48520 type.go:168] "Request Body" body=""
	I1205 06:29:38.097981   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:38.098280   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:38.597967   48520 type.go:168] "Request Body" body=""
	I1205 06:29:38.598047   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:38.598406   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.098350   48520 type.go:168] "Request Body" body=""
	I1205 06:29:39.098430   48520 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-101526" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:29:39.098781   48520 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:29:39.598588   48520 node_ready.go:38] duration metric: took 6m0.001106708s for node "functional-101526" to be "Ready" ...
	I1205 06:29:39.600415   48520 out.go:203] 
	W1205 06:29:39.601638   48520 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:29:39.601661   48520 out.go:285] * 
	W1205 06:29:39.603936   48520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:29:39.604891   48520 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:47 functional-101526 containerd[5817]: time="2025-12-05T06:29:47.152273988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.113692449Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.115895789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.124136605Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:48 functional-101526 containerd[5817]: time="2025-12-05T06:29:48.124626063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.066473548Z" level=info msg="No images store for sha256:a9ca98d5566ed58a7d480e0b547a763d077f5729130098d82d4323899cd8629c"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.068671169Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-101526\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.079777438Z" level=info msg="ImageCreate event name:\"sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.080191957Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.868471490Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.871112890Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.874206308Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 05 06:29:49 functional-101526 containerd[5817]: time="2025-12-05T06:29:49.885755494Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.925762832Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.928324535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.935676430Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.936150864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.959895095Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.962193960Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.964242689Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 05 06:29:50 functional-101526 containerd[5817]: time="2025-12-05T06:29:50.972002933Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.100203036Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.102505036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.110296271Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:29:51 functional-101526 containerd[5817]: time="2025-12-05T06:29:51.110594687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:29:55.132635    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:55.133029    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:55.134686    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:55.135111    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:29:55.136889    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:29:55 up  1:12,  0 user,  load average: 0.40, 0.30, 0.52
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:29:51 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 05 06:29:52 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:52 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:52 functional-101526 kubelet[9695]: E1205 06:29:52.423717    9695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:52 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 05 06:29:53 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:53 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:53 functional-101526 kubelet[9794]: E1205 06:29:53.156260    9794 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 05 06:29:53 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:53 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:53 functional-101526 kubelet[9815]: E1205 06:29:53.903561    9815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:53 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:29:54 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 05 06:29:54 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:54 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:29:54 functional-101526 kubelet[9836]: E1205 06:29:54.660041    9836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:29:54 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:29:54 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
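Note: the kubelet journal above shows the proximate failure. On this cgroup v1 host, kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), so the static control-plane pods never come up and every apiserver request in the wait loop is refused. The kubeadm warnings later in this report name the opt-out: the KubeletConfiguration option FailCgroupV1 must be set to false for kubelet v1.35+ to keep running on cgroup v1. A minimal sketch of that override follows, assuming the config path kubeadm reports writing; the test harness applied no such patch:

	# sketch only: append the cgroup v1 opt-out to the kubelet config kubeadm wrote
	# (per the same warning, kubeadm's SystemVerification preflight check must
	# also be skipped explicitly for init to proceed on cgroup v1)
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet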
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (343.375757ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.22s)
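Note: the round_trippers loop above polls GET /api/v1/nodes/functional-101526 every 500ms for 6m0s and gets connection refused throughout. The probe can be reproduced by hand against the same endpoint taken from the log (illustrative only; -k skips TLS verification since this is a pure reachability check):

	curl -k https://192.168.49.2:8441/healthz
	# during this failure window this returns a connection-refused error (curl exit code 7)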

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:33:01.801070    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:34:14.020607    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:37.086296    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:38:01.797660    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:39:14.019811    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.074580205s)

-- stdout --
	* [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288295s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.075881929s for "functional-101526" cluster.
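Note: the suggestion printed at the end of the stderr above can be applied verbatim to this profile, e.g.:

	out/minikube-linux-arm64 start -p functional-101526 --extra-config=kubelet.cgroup-driver=systemd

Whether it helps here is untested: the kubelet journal attributes the crash loop to cgroup v1 validation rather than to the cgroup driver, so the FailCgroupV1 override noted earlier is the more targeted lever.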
I1205 06:42:09.112394    4192 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
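Note: the post-mortem helpers already use Go templates against minikube status; the same technique works directly against this inspect dump when a single field is wanted. Illustrative, with the value matching the JSON above:

	# extract the host port mapped to the apiserver port 8441/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-101526
	# => 32791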
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (330.6637ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-226068 image ls --format yaml --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format json --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format table --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh     │ functional-226068 ssh pgrep buildkitd                                                                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image   │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                  │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls                                                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete  │ -p functional-226068                                                                                                                                    │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start   │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start   │ -p functional-101526 --alsologtostderr -v=8                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:latest                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add minikube-local-cache-test:functional-101526                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache delete minikube-local-cache-test:functional-101526                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl images                                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ cache   │ functional-101526 cache reload                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ kubectl │ functional-101526 kubectl -- --context functional-101526 get pods                                                                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ start   │ -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:29:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:29:56.087419   54335 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:29:56.087558   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087562   54335 out.go:374] Setting ErrFile to fd 2...
	I1205 06:29:56.087566   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087860   54335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:29:56.088207   54335 out.go:368] Setting JSON to false
	I1205 06:29:56.088971   54335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4343,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:29:56.089024   54335 start.go:143] virtualization:  
	I1205 06:29:56.093248   54335 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:29:56.096933   54335 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:29:56.097023   54335 notify.go:221] Checking for updates...
	I1205 06:29:56.100720   54335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:29:56.103681   54335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:29:56.106734   54335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:29:56.110260   54335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:29:56.113288   54335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:29:56.116882   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:56.116976   54335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:29:56.159923   54335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:29:56.160029   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.216532   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.206341969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.216625   54335 docker.go:319] overlay module found
	I1205 06:29:56.221471   54335 out.go:179] * Using the docker driver based on existing profile
	I1205 06:29:56.224343   54335 start.go:309] selected driver: docker
	I1205 06:29:56.224353   54335 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.224443   54335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:29:56.224557   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.277319   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.268438767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.277800   54335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:29:56.277821   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:56.277884   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:56.278047   54335 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.282961   54335 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:29:56.285729   54335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:29:56.288624   54335 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:29:56.291591   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:56.291657   54335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:29:56.310650   54335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:29:56.310660   54335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:29:56.348534   54335 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:29:56.550462   54335 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 06:29:56.550637   54335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:29:56.550701   54335 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550781   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:29:56.550790   54335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.262µs
	I1205 06:29:56.550802   54335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:29:56.550812   54335 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550840   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:29:56.550844   54335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.707µs
	I1205 06:29:56.550849   54335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550857   54335 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550888   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:29:56.550892   54335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.93µs
	I1205 06:29:56.550897   54335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550906   54335 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550932   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:29:56.550937   54335 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:29:56.550939   54335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.076µs
	I1205 06:29:56.550944   54335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550952   54335 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550977   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:29:56.550965   54335 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550981   54335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.187µs
	I1205 06:29:56.550986   54335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550993   54335 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551016   54335 start.go:364] duration metric: took 28.546µs to acquireMachinesLock for "functional-101526"
	I1205 06:29:56.551022   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:29:56.551025   54335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.24µs
	I1205 06:29:56.551035   54335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:29:56.551034   54335 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:29:56.551039   54335 fix.go:54] fixHost starting: 
	I1205 06:29:56.551042   54335 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551065   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:29:56.551069   54335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.16µs
	I1205 06:29:56.551073   54335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:29:56.551081   54335 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551103   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:29:56.551106   54335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 26.888µs
	I1205 06:29:56.551110   54335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:29:56.551117   54335 cache.go:87] Successfully saved all images to host disk.
	I1205 06:29:56.551339   54335 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:29:56.568156   54335 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:29:56.568181   54335 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:29:56.571582   54335 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:29:56.571608   54335 machine.go:94] provisionDockerMachine start ...
	I1205 06:29:56.571688   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.588675   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.588995   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.589001   54335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:29:56.736543   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.736557   54335 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:29:56.736615   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.754489   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.754781   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.754789   54335 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:29:56.915291   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.915355   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.933044   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.933393   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.933407   54335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:29:57.085183   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:29:57.085199   54335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:29:57.085221   54335 ubuntu.go:190] setting up certificates
	I1205 06:29:57.085229   54335 provision.go:84] configureAuth start
	I1205 06:29:57.085299   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:57.101349   54335 provision.go:143] copyHostCerts
	I1205 06:29:57.101410   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:29:57.101421   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:29:57.101492   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:29:57.101592   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:29:57.101596   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:29:57.101621   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:29:57.101678   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:29:57.101680   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:29:57.101703   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:29:57.101750   54335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:29:57.543303   54335 provision.go:177] copyRemoteCerts
	I1205 06:29:57.543357   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:29:57.543409   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.560691   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.666006   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:29:57.683446   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:29:57.700645   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:29:57.717863   54335 provision.go:87] duration metric: took 632.597506ms to configureAuth
	I1205 06:29:57.717880   54335 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:29:57.718064   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:57.718070   54335 machine.go:97] duration metric: took 1.146457487s to provisionDockerMachine
	I1205 06:29:57.718076   54335 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:29:57.718086   54335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:29:57.718137   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:29:57.718174   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.735331   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.841496   54335 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:29:57.844702   54335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:29:57.844721   54335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:29:57.844731   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:29:57.844783   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:29:57.844859   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:29:57.844934   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:29:57.844984   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:29:57.852337   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:57.869668   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:29:57.887019   54335 start.go:296] duration metric: took 168.92936ms for postStartSetup
	I1205 06:29:57.887102   54335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:29:57.887149   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.903894   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.011756   54335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:29:58.016900   54335 fix.go:56] duration metric: took 1.465853892s for fixHost
	I1205 06:29:58.016919   54335 start.go:83] releasing machines lock for "functional-101526", held for 1.465896107s
	I1205 06:29:58.016988   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:58.035591   54335 ssh_runner.go:195] Run: cat /version.json
	I1205 06:29:58.035642   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.035909   54335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:29:58.035957   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.053529   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.058886   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.156784   54335 ssh_runner.go:195] Run: systemctl --version
	I1205 06:29:58.245777   54335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:29:58.249918   54335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:29:58.249974   54335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:29:58.257133   54335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:29:58.257146   54335 start.go:496] detecting cgroup driver to use...
	I1205 06:29:58.257190   54335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:29:58.257233   54335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:29:58.273979   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:29:58.288748   54335 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:29:58.288814   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:29:58.305248   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:29:58.319216   54335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:29:58.440307   54335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:29:58.559446   54335 docker.go:234] disabling docker service ...
	I1205 06:29:58.559504   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:29:58.574399   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:29:58.587407   54335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:29:58.701676   54335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:29:58.808689   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:29:58.821276   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:29:58.836401   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:29:58.846421   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:29:58.855275   54335 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:29:58.855341   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:29:58.864125   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.872649   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:29:58.881354   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.890354   54335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:29:58.898337   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:29:58.907106   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:29:58.915882   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:29:58.924414   54335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:29:58.931809   54335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:29:58.939114   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.065680   54335 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:29:59.195981   54335 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:29:59.196040   54335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:29:59.199987   54335 start.go:564] Will wait 60s for crictl version
	I1205 06:29:59.200039   54335 ssh_runner.go:195] Run: which crictl
	I1205 06:29:59.203560   54335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:29:59.235649   54335 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:29:59.235710   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.255405   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.283346   54335 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:29:59.286262   54335 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:29:59.301845   54335 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:29:59.308610   54335 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:29:59.311441   54335 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:29:59.311553   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:59.311627   54335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:29:59.336067   54335 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:29:59.336079   54335 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:29:59.336085   54335 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:29:59.336175   54335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:29:59.336232   54335 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:29:59.363378   54335 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:29:59.363395   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:59.363403   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:59.363415   54335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:29:59.363436   54335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:29:59.363559   54335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:29:59.363624   54335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:29:59.371046   54335 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:29:59.371108   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:29:59.378354   54335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:29:59.390503   54335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:29:59.402745   54335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1205 06:29:59.414910   54335 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:29:59.418646   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.529578   54335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:29:59.846402   54335 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:29:59.846413   54335 certs.go:195] generating shared ca certs ...
	I1205 06:29:59.846426   54335 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:29:59.846569   54335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:29:59.846610   54335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:29:59.846616   54335 certs.go:257] generating profile certs ...
	I1205 06:29:59.846728   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:29:59.846770   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:29:59.846811   54335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:29:59.846921   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:29:59.846956   54335 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:29:59.846962   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:29:59.846989   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:29:59.847014   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:29:59.847036   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:29:59.847085   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:59.847736   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:29:59.867939   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:29:59.888562   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:29:59.907283   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:29:59.927879   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:29:59.944224   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:29:59.960459   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:29:59.979078   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:29:59.996293   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:30:00.066962   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:30:00.118991   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:30:00.185989   54335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:30:00.235503   54335 ssh_runner.go:195] Run: openssl version
	I1205 06:30:00.255104   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.270140   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:30:00.290181   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295705   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295771   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.399762   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:30:00.412238   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.433387   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:30:00.449934   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455249   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455319   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.517764   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:30:00.530824   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.546605   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:30:00.555560   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561005   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561068   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.611790   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
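The openssl/ln/test triples above implement OpenSSL's subject-hash trust links: `openssl x509 -hash -noout` prints the CA's subject hash (b5213941 for minikubeCA here), and OpenSSL looks up CAs via symlinks named `<hash>.0` under /etc/ssl/certs. The same check can be reproduced by hand; a sketch:

    # recompute the subject hash and confirm the trust link exists (sketch)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "trust link present"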
	I1205 06:30:00.623580   54335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:30:00.628736   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:30:00.674439   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:30:00.717432   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:30:00.760669   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:30:00.802949   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:30:00.845730   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
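The `-checkend 86400` runs above are expiry probes: openssl exits non-zero if the certificate expires within the given number of seconds (86400 s = 24 h), which is how minikube decides whether a control-plane cert needs regeneration. For example:

    # exits 0 while the cert is valid for at least another day (sketch)
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt \
      && echo "valid for >= 24h" \
      || echo "expires within 24h; would be regenerated"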
	I1205 06:30:00.892769   54335 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:30:00.892871   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:30:00.892957   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.923464   54335 cri.go:89] found id: ""
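The empty `found id: ""` result comes from listing containers by CRI label: crictl can filter on the pod-namespace label the kubelet stamps on every container, so an empty list means no kube-system containers exist in any state. Reproduced by hand:

    # list kube-system container IDs in any state; empty output is what the log
    # records as: found id: "" (sketch)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system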
	I1205 06:30:00.923530   54335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:30:00.932111   54335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:30:00.932122   54335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:30:00.932182   54335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:30:00.940210   54335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:00.940808   54335 kubeconfig.go:125] found "functional-101526" server: "https://192.168.49.2:8441"
	I1205 06:30:00.942221   54335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:30:00.951085   54335 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:15:26.552544518 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:29:59.409281720 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
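Drift detection here is just `diff -u`'s exit status: 0 when the deployed kubeadm.yaml matches the freshly generated one, 1 when they differ (as above, where the admission-plugins value changed). A sketch of the same check:

    # a non-zero diff status signals config drift and triggers a reconfigure (sketch)
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
    fi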
	I1205 06:30:00.951105   54335 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:30:00.951116   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1205 06:30:00.951177   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.983535   54335 cri.go:89] found id: ""
	I1205 06:30:00.983600   54335 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:30:00.999793   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:30:01.011193   54335 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  5 06:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Dec  5 06:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  5 06:19 /etc/kubernetes/scheduler.conf
	
	I1205 06:30:01.011277   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:30:01.020421   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:30:01.029014   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.029083   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:30:01.037495   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.045879   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.045943   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.054299   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:30:01.063067   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.063128   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
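The grep/rm pairs above scrub stale kubeconfigs: each file under /etc/kubernetes is kept only if it already points at the expected endpoint; otherwise it is deleted so the kubeconfig phase below regenerates it (here admin.conf passed the grep and was kept). The same logic as a loop, a sketch of what minikube does file by file:

    ep='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done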
	I1205 06:30:01.071319   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:30:01.080035   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:01.126871   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.550689   54335 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.423791138s)
	I1205 06:30:02.550750   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.758304   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.826924   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
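The restart path re-runs individual `kubeadm init` phases against the refreshed config rather than performing a full `kubeadm init`, so valid existing state (certs, etcd data) is reused. The same sequence, runnable by hand (a sketch; the unquoted $phase is word-split into separate arguments on purpose):

    cfg=/var/tmp/minikube/kubeadm.yaml
    bindir=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$bindir:$PATH" kubeadm init phase $phase --config "$cfg"
    done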
	I1205 06:30:02.872904   54335 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:30:02.872975   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" poll repeats every ~500ms from 06:30:03.373516 through 06:31:02.373960; no kube-apiserver process appears ...]
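Behind those polls is a simple wait loop: `pgrep -f` matches the pattern against the full command line, `-x` requires an exact match, and `-n` returns only the newest PID. A minimal version (the real loop also enforces an overall timeout, which is what expires here):

    # poll twice a second until a kube-apiserver process appears (sketch)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done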
	I1205 06:31:02.873246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:02.873355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:02.899119   54335 cri.go:89] found id: ""
	I1205 06:31:02.899133   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.899140   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:02.899145   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:02.899201   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:02.926015   54335 cri.go:89] found id: ""
	I1205 06:31:02.926028   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.926036   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:02.926041   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:02.926100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:02.950775   54335 cri.go:89] found id: ""
	I1205 06:31:02.950788   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.950795   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:02.950800   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:02.950859   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:02.978268   54335 cri.go:89] found id: ""
	I1205 06:31:02.978282   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.978289   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:02.978294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:02.978352   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:03.015482   54335 cri.go:89] found id: ""
	I1205 06:31:03.015497   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.015506   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:03.015511   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:03.015575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:03.041353   54335 cri.go:89] found id: ""
	I1205 06:31:03.041366   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.041373   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:03.041379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:03.041463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:03.066457   54335 cri.go:89] found id: ""
	I1205 06:31:03.066472   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.066479   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:03.066487   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:03.066502   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:03.121069   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:03.121087   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:03.131794   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:03.131809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:03.195836   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:03.195847   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:03.195859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:03.258177   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:03.258195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
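Each failed poll triggers the same five-part diagnostics bundle; it can be reproduced by hand on the node (a sketch; the `which crictl || echo crictl` fallback keeps the last command usable even when crictl is not on root's PATH):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a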
	I1205 06:31:05.785947   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:05.795932   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:05.795992   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:05.822996   54335 cri.go:89] found id: ""
	I1205 06:31:05.823010   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.823017   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:05.823022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:05.823079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:05.851647   54335 cri.go:89] found id: ""
	I1205 06:31:05.851660   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.851667   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:05.851671   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:05.851728   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:05.888840   54335 cri.go:89] found id: ""
	I1205 06:31:05.888853   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.888860   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:05.888865   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:05.888923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:05.916749   54335 cri.go:89] found id: ""
	I1205 06:31:05.916763   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.916771   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:05.916776   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:05.916838   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:05.941885   54335 cri.go:89] found id: ""
	I1205 06:31:05.941898   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.941905   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:05.941910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:05.941970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:05.967174   54335 cri.go:89] found id: ""
	I1205 06:31:05.967188   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.967195   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:05.967202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:05.967259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:05.991608   54335 cri.go:89] found id: ""
	I1205 06:31:05.991622   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.991629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:05.991637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:05.991647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:06.048885   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:06.048907   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:06.060386   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:06.060403   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:06.139830   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:06.139840   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:06.139853   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:06.202288   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:06.202307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:08.730029   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:08.740211   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:08.740272   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:08.763977   54335 cri.go:89] found id: ""
	I1205 06:31:08.763991   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.763998   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:08.764004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:08.764064   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:08.788621   54335 cri.go:89] found id: ""
	I1205 06:31:08.788635   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.788642   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:08.788647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:08.788702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:08.813441   54335 cri.go:89] found id: ""
	I1205 06:31:08.813454   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.813461   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:08.813466   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:08.813522   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:08.837930   54335 cri.go:89] found id: ""
	I1205 06:31:08.837944   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.837951   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:08.837956   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:08.838014   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:08.865898   54335 cri.go:89] found id: ""
	I1205 06:31:08.865911   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.865918   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:08.865923   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:08.865985   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:08.893385   54335 cri.go:89] found id: ""
	I1205 06:31:08.893410   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.893417   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:08.893422   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:08.893488   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:08.922394   54335 cri.go:89] found id: ""
	I1205 06:31:08.922407   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.922414   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:08.922422   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:08.922432   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:08.977895   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:08.977913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:08.989011   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:08.989025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:09.057444   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:09.057456   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:09.057471   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:09.119855   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:09.119875   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.657869   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:11.668122   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:11.668185   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:11.692170   54335 cri.go:89] found id: ""
	I1205 06:31:11.692183   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.692190   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:11.692195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:11.692253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:11.716930   54335 cri.go:89] found id: ""
	I1205 06:31:11.716945   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.716951   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:11.716962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:11.717031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:11.741795   54335 cri.go:89] found id: ""
	I1205 06:31:11.741808   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.741815   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:11.741820   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:11.741881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:11.766411   54335 cri.go:89] found id: ""
	I1205 06:31:11.766425   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.766431   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:11.766437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:11.766495   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:11.791195   54335 cri.go:89] found id: ""
	I1205 06:31:11.791209   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.791216   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:11.791221   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:11.791280   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:11.819219   54335 cri.go:89] found id: ""
	I1205 06:31:11.819233   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.819245   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:11.819251   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:11.819312   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:11.851464   54335 cri.go:89] found id: ""
	I1205 06:31:11.851478   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.851491   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:11.851498   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:11.851508   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:11.931606   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:11.931625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.960389   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:11.960407   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:12.021080   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:12.021102   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:12.032273   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:12.032290   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:12.097324   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:14.597581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:14.607724   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:14.607782   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:14.632907   54335 cri.go:89] found id: ""
	I1205 06:31:14.632921   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.632928   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:14.632933   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:14.632989   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:14.657884   54335 cri.go:89] found id: ""
	I1205 06:31:14.657898   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.657905   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:14.657910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:14.657965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:14.681364   54335 cri.go:89] found id: ""
	I1205 06:31:14.681377   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.681384   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:14.681389   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:14.681462   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:14.709552   54335 cri.go:89] found id: ""
	I1205 06:31:14.709566   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.709573   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:14.709578   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:14.709642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:14.733105   54335 cri.go:89] found id: ""
	I1205 06:31:14.733118   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.733125   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:14.733130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:14.733217   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:14.759861   54335 cri.go:89] found id: ""
	I1205 06:31:14.759874   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.759881   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:14.759887   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:14.759943   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:14.785666   54335 cri.go:89] found id: ""
	I1205 06:31:14.785679   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.785686   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:14.785693   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:14.785706   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:14.854767   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:14.854785   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:14.854795   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:14.922701   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:14.922719   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:14.953207   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:14.953223   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:15.010462   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:15.010484   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
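Each retry gathers the same five diagnostics: the kubelet and containerd journals, recent dmesg warnings and errors, container status, and `kubectl describe nodes` against the node-local kubeconfig. To collect them manually inside the node, the commands are the ones quoted verbatim in the log:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Only the last command depends on the apiserver, which is why it is the only one failing in the cycles below.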
	I1205 06:31:17.529572   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:17.539788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:17.539847   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:17.563677   54335 cri.go:89] found id: ""
	I1205 06:31:17.563691   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.563698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:17.563703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:17.563774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:17.593628   54335 cri.go:89] found id: ""
	I1205 06:31:17.593642   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.593649   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:17.593654   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:17.593720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:17.619071   54335 cri.go:89] found id: ""
	I1205 06:31:17.619084   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.619092   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:17.619097   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:17.619153   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:17.642944   54335 cri.go:89] found id: ""
	I1205 06:31:17.642958   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.642964   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:17.642970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:17.643037   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:17.667755   54335 cri.go:89] found id: ""
	I1205 06:31:17.667768   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.667775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:17.667780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:17.667836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:17.691060   54335 cri.go:89] found id: ""
	I1205 06:31:17.691073   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.691080   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:17.691085   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:17.691152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:17.714527   54335 cri.go:89] found id: ""
	I1205 06:31:17.714540   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.714547   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:17.714554   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:17.714564   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:17.777347   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:17.777365   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:17.804848   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:17.804862   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:17.866054   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:17.866072   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.877290   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:17.877305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:17.944157   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:20.445814   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:20.455929   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:20.456007   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:20.480265   54335 cri.go:89] found id: ""
	I1205 06:31:20.480280   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.480287   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:20.480294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:20.480371   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:20.504045   54335 cri.go:89] found id: ""
	I1205 06:31:20.504059   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.504065   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:20.504070   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:20.504128   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:20.528811   54335 cri.go:89] found id: ""
	I1205 06:31:20.528824   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.528831   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:20.528836   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:20.528893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:20.553249   54335 cri.go:89] found id: ""
	I1205 06:31:20.553272   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.553279   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:20.553284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:20.553358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:20.577735   54335 cri.go:89] found id: ""
	I1205 06:31:20.577767   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.577775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:20.577780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:20.577839   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:20.603821   54335 cri.go:89] found id: ""
	I1205 06:31:20.603835   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.603852   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:20.603858   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:20.603955   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:20.632954   54335 cri.go:89] found id: ""
	I1205 06:31:20.632985   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.632992   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:20.633000   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:20.633010   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:20.688822   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:20.688840   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:20.700167   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:20.700183   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:20.766199   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:20.766209   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:20.766219   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:20.829413   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:20.829439   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.369036   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:23.379250   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:23.379308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:23.407254   54335 cri.go:89] found id: ""
	I1205 06:31:23.407268   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.407275   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:23.407280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:23.407335   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:23.431989   54335 cri.go:89] found id: ""
	I1205 06:31:23.432002   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.432009   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:23.432014   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:23.432079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:23.467269   54335 cri.go:89] found id: ""
	I1205 06:31:23.467287   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.467293   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:23.467299   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:23.467362   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:23.490943   54335 cri.go:89] found id: ""
	I1205 06:31:23.490956   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.490962   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:23.490968   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:23.491025   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:23.519217   54335 cri.go:89] found id: ""
	I1205 06:31:23.519232   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.519239   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:23.519244   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:23.519306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:23.543863   54335 cri.go:89] found id: ""
	I1205 06:31:23.543877   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.543883   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:23.543888   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:23.543956   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:23.567865   54335 cri.go:89] found id: ""
	I1205 06:31:23.567878   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.567897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:23.567905   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:23.567914   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:23.632509   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:23.632529   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.662290   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:23.662305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:23.719254   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:23.719272   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:23.730331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:23.730346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:23.792133   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:26.293128   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:26.304108   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:26.304168   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:26.331011   54335 cri.go:89] found id: ""
	I1205 06:31:26.331024   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.331031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:26.331040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:26.331097   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:26.358547   54335 cri.go:89] found id: ""
	I1205 06:31:26.358562   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.358569   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:26.358573   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:26.358630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:26.387125   54335 cri.go:89] found id: ""
	I1205 06:31:26.387139   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.387146   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:26.387151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:26.387210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:26.412329   54335 cri.go:89] found id: ""
	I1205 06:31:26.412343   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.412350   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:26.412355   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:26.412433   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:26.437117   54335 cri.go:89] found id: ""
	I1205 06:31:26.437130   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.437138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:26.437142   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:26.437253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:26.465767   54335 cri.go:89] found id: ""
	I1205 06:31:26.465779   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.465787   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:26.465792   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:26.465855   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:26.489618   54335 cri.go:89] found id: ""
	I1205 06:31:26.489636   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.489643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:26.489651   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:26.489661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:26.516285   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:26.516307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:26.571623   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:26.571639   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:26.582532   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:26.582547   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:26.648629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:26.648640   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:26.648652   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:29.213295   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:29.223226   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:29.223291   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:29.248501   54335 cri.go:89] found id: ""
	I1205 06:31:29.248514   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.248521   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:29.248526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:29.248585   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:29.273551   54335 cri.go:89] found id: ""
	I1205 06:31:29.273564   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.273571   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:29.273576   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:29.273633   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:29.297959   54335 cri.go:89] found id: ""
	I1205 06:31:29.297972   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.297979   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:29.297985   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:29.298043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:29.322784   54335 cri.go:89] found id: ""
	I1205 06:31:29.322798   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.322809   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:29.322814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:29.322870   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:29.351067   54335 cri.go:89] found id: ""
	I1205 06:31:29.351080   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.351087   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:29.351092   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:29.351163   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:29.378768   54335 cri.go:89] found id: ""
	I1205 06:31:29.378782   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.378789   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:29.378794   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:29.378854   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:29.403528   54335 cri.go:89] found id: ""
	I1205 06:31:29.403542   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.403549   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:29.403556   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:29.403567   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:29.471248   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:29.463937   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.464521   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466184   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466622   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.467929   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:29.463937   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.464521   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466184   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466622   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.467929   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:29.471259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:29.471269   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:29.533062   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:29.533080   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:29.564293   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:29.564323   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:29.619083   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:29.619101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:32.130510   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:32.143539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:32.143642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:32.171414   54335 cri.go:89] found id: ""
	I1205 06:31:32.171428   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.171436   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:32.171441   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:32.171499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:32.196112   54335 cri.go:89] found id: ""
	I1205 06:31:32.196125   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.196132   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:32.196137   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:32.196195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:32.223236   54335 cri.go:89] found id: ""
	I1205 06:31:32.223250   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.223257   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:32.223261   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:32.223317   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:32.247226   54335 cri.go:89] found id: ""
	I1205 06:31:32.247240   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.247247   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:32.247252   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:32.247308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:32.275892   54335 cri.go:89] found id: ""
	I1205 06:31:32.275905   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.275912   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:32.275918   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:32.275975   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:32.304746   54335 cri.go:89] found id: ""
	I1205 06:31:32.304759   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.304767   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:32.304772   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:32.304831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:32.329065   54335 cri.go:89] found id: ""
	I1205 06:31:32.329078   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.329085   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:32.329092   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:32.329101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:32.384331   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:32.384349   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:32.395108   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:32.395123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:32.457079   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:32.449726   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.450348   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.451857   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.452270   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.453749   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:32.449726   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.450348   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.451857   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.452270   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.453749   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:32.457097   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:32.457108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:32.520612   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:32.520631   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:35.049835   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:35.059785   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:35.059850   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:35.084594   54335 cri.go:89] found id: ""
	I1205 06:31:35.084610   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.084617   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:35.084624   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:35.084682   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:35.119519   54335 cri.go:89] found id: ""
	I1205 06:31:35.119533   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.119553   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:35.119559   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:35.119625   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:35.146284   54335 cri.go:89] found id: ""
	I1205 06:31:35.146298   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.146305   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:35.146310   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:35.146370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:35.174570   54335 cri.go:89] found id: ""
	I1205 06:31:35.174583   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.174590   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:35.174596   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:35.174653   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:35.198347   54335 cri.go:89] found id: ""
	I1205 06:31:35.198361   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.198368   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:35.198374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:35.198430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:35.226196   54335 cri.go:89] found id: ""
	I1205 06:31:35.226210   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.226216   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:35.226222   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:35.226281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:35.250876   54335 cri.go:89] found id: ""
	I1205 06:31:35.250889   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.250897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:35.250904   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:35.250913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:35.304930   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:35.304948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:35.315954   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:35.315970   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:35.377099   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:35.369290   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.369826   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371500   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371964   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.373533   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:35.369290   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.369826   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371500   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371964   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.373533   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:35.377109   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:35.377120   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:35.437784   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:35.437801   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:37.968228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:37.977892   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:37.977968   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:38.010142   54335 cri.go:89] found id: ""
	I1205 06:31:38.010158   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.010173   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:38.010180   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:38.010249   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:38.048020   54335 cri.go:89] found id: ""
	I1205 06:31:38.048034   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.048041   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:38.048047   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:38.048112   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:38.077977   54335 cri.go:89] found id: ""
	I1205 06:31:38.077991   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.077999   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:38.078004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:38.078068   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:38.115520   54335 cri.go:89] found id: ""
	I1205 06:31:38.115534   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.115541   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:38.115546   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:38.115618   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:38.141580   54335 cri.go:89] found id: ""
	I1205 06:31:38.141593   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.141613   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:38.141618   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:38.141673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:38.167473   54335 cri.go:89] found id: ""
	I1205 06:31:38.167487   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.167493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:38.167499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:38.167565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:38.190856   54335 cri.go:89] found id: ""
	I1205 06:31:38.190869   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.190876   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:38.190884   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:38.190894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:38.245488   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:38.245505   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:38.255819   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:38.255834   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:38.319935   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:38.311836   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.312540   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314137   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314745   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.316388   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:38.311836   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.312540   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314137   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314745   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.316388   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
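Each describe-nodes attempt fails the same way: kubectl cannot reach the apiserver on localhost:8441, so the connection-refused lines are a downstream symptom of the missing kube-apiserver container rather than a separate fault. One way to confirm nothing is listening on that port from inside the node (a sketch; the functional-101526 profile name is taken from this run's start command, and the ss utility is assumed present in the node image):

	minikube ssh -p functional-101526 -- "sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'"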
	I1205 06:31:38.319952   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:38.319963   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:38.381733   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:38.381750   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
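The container-status command on the line above is written so the same invocation works whether or not crictl resolves. Expanded for readability (same behavior, a sketch):

	CRICTL=$(which crictl || echo crictl)       # absolute path when found, bare name otherwise
	sudo "$CRICTL" ps -a || sudo docker ps -a   # fall back to docker on nodes without a working crictl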
	I1205 06:31:40.911397   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:40.921257   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:40.921321   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:40.947604   54335 cri.go:89] found id: ""
	I1205 06:31:40.947618   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.947625   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:40.947630   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:40.947694   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:40.973136   54335 cri.go:89] found id: ""
	I1205 06:31:40.973148   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.973186   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:40.973191   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:40.973256   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:40.996412   54335 cri.go:89] found id: ""
	I1205 06:31:40.996425   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.996432   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:40.996437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:40.996497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:41.024001   54335 cri.go:89] found id: ""
	I1205 06:31:41.024015   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.024022   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:41.024028   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:41.024086   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:41.051496   54335 cri.go:89] found id: ""
	I1205 06:31:41.051510   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.051517   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:41.051522   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:41.051582   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:41.080451   54335 cri.go:89] found id: ""
	I1205 06:31:41.080464   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.080471   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:41.080476   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:41.080533   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:41.117388   54335 cri.go:89] found id: ""
	I1205 06:31:41.117401   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.117409   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:41.117416   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:41.117426   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:41.182349   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:41.182368   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:41.193093   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:41.193108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:41.254159   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:41.246911   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.247523   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249025   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249503   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.250928   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:41.246911   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.247523   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249025   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249503   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.250928   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:41.254170   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:41.254181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:41.321082   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:41.321101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:43.851964   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:43.862187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:43.862247   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:43.886923   54335 cri.go:89] found id: ""
	I1205 06:31:43.886937   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.886944   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:43.886950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:43.887009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:43.912496   54335 cri.go:89] found id: ""
	I1205 06:31:43.912509   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.912516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:43.912521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:43.912579   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:43.936914   54335 cri.go:89] found id: ""
	I1205 06:31:43.936928   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.936938   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:43.936943   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:43.937000   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:43.961282   54335 cri.go:89] found id: ""
	I1205 06:31:43.961297   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.961304   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:43.961314   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:43.961378   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:43.988380   54335 cri.go:89] found id: ""
	I1205 06:31:43.988394   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.988401   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:43.988406   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:43.988464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:44.020415   54335 cri.go:89] found id: ""
	I1205 06:31:44.020429   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.020437   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:44.020442   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:44.020501   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:44.045852   54335 cri.go:89] found id: ""
	I1205 06:31:44.045866   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.045873   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:44.045881   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:44.045894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:44.056666   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:44.056681   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:44.135868   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:44.126530   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.127194   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129371   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129954   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.131639   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:44.126530   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.127194   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129371   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129954   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.131639   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:44.135879   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:44.135890   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:44.204481   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:44.204500   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:44.232917   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:44.232935   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:46.789779   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:46.799818   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:46.799875   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:46.823971   54335 cri.go:89] found id: ""
	I1205 06:31:46.823985   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.823992   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:46.823998   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:46.824061   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:46.848342   54335 cri.go:89] found id: ""
	I1205 06:31:46.848356   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.848363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:46.848368   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:46.848425   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:46.873786   54335 cri.go:89] found id: ""
	I1205 06:31:46.873800   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.873807   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:46.873812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:46.873873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:46.903465   54335 cri.go:89] found id: ""
	I1205 06:31:46.903479   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.903487   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:46.903492   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:46.903549   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:46.932432   54335 cri.go:89] found id: ""
	I1205 06:31:46.932446   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.932453   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:46.932458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:46.932518   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:46.957671   54335 cri.go:89] found id: ""
	I1205 06:31:46.957684   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.957692   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:46.957697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:46.957760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:46.983050   54335 cri.go:89] found id: ""
	I1205 06:31:46.983063   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.983077   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:46.983085   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:46.983095   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:47.042088   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:47.042105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:47.053482   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:47.053498   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:47.131108   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:47.122748   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.123420   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125206   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125739   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.127319   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:47.122748   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.123420   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125206   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125739   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.127319   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:47.131117   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:47.131128   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:47.204434   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:47.204452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:49.735640   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:49.745807   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:49.745868   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:49.770984   54335 cri.go:89] found id: ""
	I1205 06:31:49.770997   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.771004   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:49.771009   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:49.771072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:49.795524   54335 cri.go:89] found id: ""
	I1205 06:31:49.795538   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.795545   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:49.795550   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:49.795605   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:49.820126   54335 cri.go:89] found id: ""
	I1205 06:31:49.820140   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.820147   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:49.820152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:49.820209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:49.844379   54335 cri.go:89] found id: ""
	I1205 06:31:49.844392   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.844401   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:49.844408   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:49.844465   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:49.871132   54335 cri.go:89] found id: ""
	I1205 06:31:49.871144   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.871152   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:49.871157   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:49.871214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:49.894867   54335 cri.go:89] found id: ""
	I1205 06:31:49.894880   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.894887   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:49.894893   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:49.894949   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:49.920144   54335 cri.go:89] found id: ""
	I1205 06:31:49.920157   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.920164   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:49.920171   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:49.920181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:49.979573   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:49.979595   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:49.990405   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:49.990420   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:50.061353   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:50.061364   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:50.061376   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:50.139097   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:50.139131   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:52.678459   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:52.688604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:52.688663   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:52.712686   54335 cri.go:89] found id: ""
	I1205 06:31:52.712700   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.712707   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:52.712712   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:52.712774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:52.746954   54335 cri.go:89] found id: ""
	I1205 06:31:52.746968   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.746975   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:52.746980   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:52.747039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:52.771325   54335 cri.go:89] found id: ""
	I1205 06:31:52.771338   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.771345   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:52.771350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:52.771406   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:52.795882   54335 cri.go:89] found id: ""
	I1205 06:31:52.795896   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.795902   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:52.795908   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:52.795965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:52.820064   54335 cri.go:89] found id: ""
	I1205 06:31:52.820079   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.820085   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:52.820090   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:52.820150   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:52.848297   54335 cri.go:89] found id: ""
	I1205 06:31:52.848311   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.848317   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:52.848323   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:52.848381   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:52.876041   54335 cri.go:89] found id: ""
	I1205 06:31:52.876055   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.876062   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:52.876069   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:52.876079   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:52.931790   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:52.931811   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:52.942929   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:52.942944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:53.007664   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:53.007675   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:53.007686   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:53.073695   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:53.073712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.610763   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:55.620883   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:55.620945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:55.645677   54335 cri.go:89] found id: ""
	I1205 06:31:55.645691   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.645698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:55.645703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:55.645763   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:55.670962   54335 cri.go:89] found id: ""
	I1205 06:31:55.670975   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.670982   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:55.670987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:55.671045   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:55.695354   54335 cri.go:89] found id: ""
	I1205 06:31:55.695367   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.695374   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:55.695379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:55.695447   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:55.719264   54335 cri.go:89] found id: ""
	I1205 06:31:55.719277   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.719284   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:55.719290   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:55.719347   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:55.742928   54335 cri.go:89] found id: ""
	I1205 06:31:55.742941   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.742948   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:55.742954   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:55.743013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:55.766643   54335 cri.go:89] found id: ""
	I1205 06:31:55.766657   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.766664   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:55.766672   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:55.766729   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:55.789985   54335 cri.go:89] found id: ""
	I1205 06:31:55.789999   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.790005   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:55.790051   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:55.790062   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.817984   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:55.818000   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:55.874068   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:55.874085   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:55.885873   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:55.885888   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:55.950375   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:55.950385   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:55.950396   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.513319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:58.523187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:58.523244   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:58.546403   54335 cri.go:89] found id: ""
	I1205 06:31:58.546416   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.546423   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:58.546429   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:58.546486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:58.570005   54335 cri.go:89] found id: ""
	I1205 06:31:58.570019   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.570035   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:58.570040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:58.570098   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:58.594200   54335 cri.go:89] found id: ""
	I1205 06:31:58.594214   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.594220   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:58.594225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:58.594284   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:58.618421   54335 cri.go:89] found id: ""
	I1205 06:31:58.618434   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.618440   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:58.618445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:58.618499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:58.642656   54335 cri.go:89] found id: ""
	I1205 06:31:58.642669   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.642676   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:58.642682   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:58.642742   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:58.667838   54335 cri.go:89] found id: ""
	I1205 06:31:58.667850   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.667858   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:58.667863   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:58.667933   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:58.695900   54335 cri.go:89] found id: ""
	I1205 06:31:58.695914   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.695921   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:58.695929   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:58.695939   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:58.751191   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:58.751209   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:58.761861   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:58.761882   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:58.829503   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:58.829513   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:58.829524   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.892286   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:58.892304   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:01.420326   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:01.430350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:01.430415   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:01.455307   54335 cri.go:89] found id: ""
	I1205 06:32:01.455320   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.455328   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:01.455333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:01.455388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:01.479758   54335 cri.go:89] found id: ""
	I1205 06:32:01.479771   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.479778   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:01.479784   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:01.479840   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:01.502828   54335 cri.go:89] found id: ""
	I1205 06:32:01.502841   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.502848   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:01.502853   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:01.502908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:01.528675   54335 cri.go:89] found id: ""
	I1205 06:32:01.528688   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.528698   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:01.528704   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:01.528762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:01.553405   54335 cri.go:89] found id: ""
	I1205 06:32:01.553419   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.553426   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:01.553431   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:01.553510   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:01.578373   54335 cri.go:89] found id: ""
	I1205 06:32:01.578387   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.578394   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:01.578400   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:01.578464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:01.603666   54335 cri.go:89] found id: ""
	I1205 06:32:01.603689   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.603697   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:01.603704   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:01.603714   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:01.661152   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:01.661181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:01.672814   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:01.672831   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:01.736722   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:01.736731   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:01.736742   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:01.799762   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:01.799780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
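The probe-and-gather cycle above repeats roughly every three seconds while minikube waits for the control plane: each pass runs "sudo crictl ps -a --quiet --name=<component>" for every expected container, finds none, and then collects kubelet, dmesg, describe-nodes, containerd, and container-status output. The same probe can be replayed by hand; a minimal sketch, assuming the functional-101526 profile is still up and reachable via minikube ssh:

    # empty output here corresponds to the found id: "" lines in the log above
    minikube -p functional-101526 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver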
	I1205 06:32:04.328972   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:04.339381   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:04.339441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:04.365391   54335 cri.go:89] found id: ""
	I1205 06:32:04.365405   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.365412   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:04.365418   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:04.365487   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:04.395556   54335 cri.go:89] found id: ""
	I1205 06:32:04.395570   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.395577   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:04.395582   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:04.395640   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:04.425328   54335 cri.go:89] found id: ""
	I1205 06:32:04.425341   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.425348   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:04.425354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:04.425420   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:04.450514   54335 cri.go:89] found id: ""
	I1205 06:32:04.450528   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.450536   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:04.450541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:04.450604   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:04.479372   54335 cri.go:89] found id: ""
	I1205 06:32:04.479386   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.479393   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:04.479398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:04.479459   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:04.504452   54335 cri.go:89] found id: ""
	I1205 06:32:04.504466   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.504473   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:04.504479   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:04.504539   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:04.529609   54335 cri.go:89] found id: ""
	I1205 06:32:04.529622   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.529629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:04.529637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:04.529649   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:04.584301   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:04.584319   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:04.595557   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:04.595572   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:04.660266   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:04.660277   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:04.660288   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:04.723098   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:04.723115   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:07.257738   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:07.268081   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:07.268144   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:07.292559   54335 cri.go:89] found id: ""
	I1205 06:32:07.292573   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.292580   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:07.292585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:07.292645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:07.316782   54335 cri.go:89] found id: ""
	I1205 06:32:07.316796   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.316803   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:07.316809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:07.316869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:07.346176   54335 cri.go:89] found id: ""
	I1205 06:32:07.346189   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.346196   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:07.346201   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:07.346263   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:07.378787   54335 cri.go:89] found id: ""
	I1205 06:32:07.378800   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.378807   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:07.378812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:07.378869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:07.406652   54335 cri.go:89] found id: ""
	I1205 06:32:07.406666   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.406673   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:07.406678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:07.406746   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:07.438624   54335 cri.go:89] found id: ""
	I1205 06:32:07.438642   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.438649   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:07.438655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:07.438726   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:07.464230   54335 cri.go:89] found id: ""
	I1205 06:32:07.464243   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.464250   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:07.464257   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:07.464266   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:07.520945   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:07.520962   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:07.531896   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:07.531911   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:07.598302   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:07.598317   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:07.598327   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:07.661122   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:07.661139   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.190348   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:10.201225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:10.201307   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:10.230433   54335 cri.go:89] found id: ""
	I1205 06:32:10.230446   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.230453   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:10.230458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:10.230512   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:10.254051   54335 cri.go:89] found id: ""
	I1205 06:32:10.254070   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.254077   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:10.254082   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:10.254140   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:10.278518   54335 cri.go:89] found id: ""
	I1205 06:32:10.278531   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.278538   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:10.278543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:10.278599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:10.302979   54335 cri.go:89] found id: ""
	I1205 06:32:10.302992   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.302999   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:10.303004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:10.303059   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:10.331316   54335 cri.go:89] found id: ""
	I1205 06:32:10.331330   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.331337   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:10.331341   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:10.331400   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:10.362875   54335 cri.go:89] found id: ""
	I1205 06:32:10.362889   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.362896   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:10.362902   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:10.362959   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:10.393788   54335 cri.go:89] found id: ""
	I1205 06:32:10.393802   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.393810   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:10.393818   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:10.393829   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:10.459886   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:10.459895   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:10.459905   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:10.521460   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:10.521481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.549040   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:10.549056   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:10.605396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:10.605414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.117854   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:13.128117   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:13.128179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:13.153085   54335 cri.go:89] found id: ""
	I1205 06:32:13.153098   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.153105   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:13.153110   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:13.153199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:13.178442   54335 cri.go:89] found id: ""
	I1205 06:32:13.178455   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.178462   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:13.178467   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:13.178524   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:13.203207   54335 cri.go:89] found id: ""
	I1205 06:32:13.203220   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.203229   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:13.203234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:13.203292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:13.228073   54335 cri.go:89] found id: ""
	I1205 06:32:13.228086   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.228093   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:13.228098   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:13.228159   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:13.253259   54335 cri.go:89] found id: ""
	I1205 06:32:13.253272   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.253288   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:13.253293   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:13.253350   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:13.278480   54335 cri.go:89] found id: ""
	I1205 06:32:13.278493   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.278500   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:13.278506   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:13.278562   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:13.301934   54335 cri.go:89] found id: ""
	I1205 06:32:13.301948   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.301955   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:13.301962   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:13.301972   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:13.356855   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:13.356876   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.368331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:13.368352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:13.438131   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:13.438141   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:13.438151   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:13.501680   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:13.501699   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
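Every describe-nodes attempt in these cycles fails the same way: kubectl cannot reach the apiserver at localhost:8441, which is consistent with the empty kube-apiserver probe results. One direct way to confirm the apiserver is down from the node is to hit its health endpoint; a hedged sketch (the /readyz path is the standard kube-apiserver health check, not something shown in this log):

    # expect "connection refused" here for as long as the memcache.go errors above persist
    minikube -p functional-101526 ssh -- curl -sk https://localhost:8441/readyz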
	I1205 06:32:16.032304   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:16.042939   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:16.043006   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:16.069762   54335 cri.go:89] found id: ""
	I1205 06:32:16.069775   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.069782   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:16.069788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:16.069844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:16.094242   54335 cri.go:89] found id: ""
	I1205 06:32:16.094255   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.094264   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:16.094270   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:16.094336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:16.120352   54335 cri.go:89] found id: ""
	I1205 06:32:16.120366   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.120373   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:16.120378   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:16.120435   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:16.149183   54335 cri.go:89] found id: ""
	I1205 06:32:16.149196   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.149203   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:16.149208   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:16.149270   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:16.179309   54335 cri.go:89] found id: ""
	I1205 06:32:16.179322   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.179328   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:16.179333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:16.179388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:16.204104   54335 cri.go:89] found id: ""
	I1205 06:32:16.204118   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.204125   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:16.204130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:16.204190   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:16.230914   54335 cri.go:89] found id: ""
	I1205 06:32:16.230927   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.230934   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:16.230941   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:16.230950   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:16.286405   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:16.286423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:16.297122   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:16.297136   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:16.367421   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:16.367430   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:16.367442   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:16.452050   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:16.452076   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:18.982231   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:18.992354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:18.992412   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:19.017989   54335 cri.go:89] found id: ""
	I1205 06:32:19.018004   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.018011   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:19.018016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:19.018077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:19.042217   54335 cri.go:89] found id: ""
	I1205 06:32:19.042230   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.042237   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:19.042242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:19.042301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:19.066699   54335 cri.go:89] found id: ""
	I1205 06:32:19.066713   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.066720   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:19.066725   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:19.066785   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:19.095590   54335 cri.go:89] found id: ""
	I1205 06:32:19.095603   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.095610   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:19.095616   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:19.095672   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:19.119155   54335 cri.go:89] found id: ""
	I1205 06:32:19.119169   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.119176   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:19.119181   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:19.119237   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:19.142787   54335 cri.go:89] found id: ""
	I1205 06:32:19.142801   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.142807   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:19.142813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:19.142873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:19.168013   54335 cri.go:89] found id: ""
	I1205 06:32:19.168025   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.168032   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:19.168039   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:19.168051   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:19.178464   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:19.178481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:19.240233   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:19.240244   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:19.240253   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:19.300198   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:19.300217   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:19.329682   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:19.329697   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:21.888551   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:21.898274   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:21.898337   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:21.922474   54335 cri.go:89] found id: ""
	I1205 06:32:21.922486   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.922493   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:21.922498   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:21.922558   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:21.950761   54335 cri.go:89] found id: ""
	I1205 06:32:21.950775   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.950781   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:21.950786   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:21.950844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:21.973829   54335 cri.go:89] found id: ""
	I1205 06:32:21.973843   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.973849   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:21.973854   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:21.973912   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:21.997620   54335 cri.go:89] found id: ""
	I1205 06:32:21.997634   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.997641   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:21.997647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:21.997702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:22.033207   54335 cri.go:89] found id: ""
	I1205 06:32:22.033221   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.033228   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:22.033234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:22.033296   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:22.062888   54335 cri.go:89] found id: ""
	I1205 06:32:22.062902   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.062909   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:22.062915   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:22.062973   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:22.091975   54335 cri.go:89] found id: ""
	I1205 06:32:22.091989   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.091996   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:22.092004   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:22.092017   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:22.103145   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:22.103160   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:22.164851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:22.164860   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:22.164870   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:22.226105   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:22.226124   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:22.253915   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:22.253929   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:24.811993   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:24.821806   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:24.821865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:24.845836   54335 cri.go:89] found id: ""
	I1205 06:32:24.845850   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.845857   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:24.845864   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:24.845919   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:24.870475   54335 cri.go:89] found id: ""
	I1205 06:32:24.870489   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.870496   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:24.870505   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:24.870560   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:24.895049   54335 cri.go:89] found id: ""
	I1205 06:32:24.895061   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.895068   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:24.895074   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:24.895130   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:24.924307   54335 cri.go:89] found id: ""
	I1205 06:32:24.924320   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.924327   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:24.924332   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:24.924390   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:24.949595   54335 cri.go:89] found id: ""
	I1205 06:32:24.949608   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.949616   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:24.949621   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:24.949680   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:24.974582   54335 cri.go:89] found id: ""
	I1205 06:32:24.974595   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.974602   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:24.974607   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:24.974664   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:25.003723   54335 cri.go:89] found id: ""
	I1205 06:32:25.003739   54335 logs.go:282] 0 containers: []
	W1205 06:32:25.003747   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:25.003755   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:25.003766   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:25.065829   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:25.065846   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:25.077220   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:25.077236   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:25.140111   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:25.140121   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:25.140135   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:25.206118   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:25.206137   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
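The cycle above is minikube's diagnostic sweep when the apiserver never comes up: it queries the CRI for each control-plane component with "crictl ps -a --quiet --name=<component>", and every query returns an empty ID list ("0 containers: []"). Below is a minimal Go sketch of that query pattern; it is an illustration only, not minikube's actual cri.go implementation, and it assumes crictl is on PATH and runnable via sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the log's `sudo crictl ps -a --quiet --name=...`
// calls: it returns the container IDs crictl prints, one per line.
// An empty result reproduces the `found id: ""` / `0 containers: []` lines.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("%s: query failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}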
	I1205 06:32:27.733938   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:27.744224   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:27.744282   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:27.769011   54335 cri.go:89] found id: ""
	I1205 06:32:27.769024   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.769031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:27.769036   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:27.769094   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:27.793434   54335 cri.go:89] found id: ""
	I1205 06:32:27.793448   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.793455   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:27.793460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:27.793556   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:27.821088   54335 cri.go:89] found id: ""
	I1205 06:32:27.821101   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.821108   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:27.821112   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:27.821209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:27.847229   54335 cri.go:89] found id: ""
	I1205 06:32:27.847242   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.847249   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:27.847254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:27.847310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:27.870944   54335 cri.go:89] found id: ""
	I1205 06:32:27.870958   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.870965   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:27.870970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:27.871031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:27.895361   54335 cri.go:89] found id: ""
	I1205 06:32:27.895375   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.895382   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:27.895388   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:27.895445   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:27.920868   54335 cri.go:89] found id: ""
	I1205 06:32:27.920881   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.920888   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:27.920897   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:27.920908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:27.984326   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:27.984346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:28.018053   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:28.018070   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:28.075646   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:28.075663   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:28.087097   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:28.087112   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:28.151403   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:30.651598   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:30.661458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:30.661527   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:30.689413   54335 cri.go:89] found id: ""
	I1205 06:32:30.689426   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.689443   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:30.689450   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:30.689523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:30.712971   54335 cri.go:89] found id: ""
	I1205 06:32:30.712987   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.712994   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:30.712999   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:30.713057   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:30.737851   54335 cri.go:89] found id: ""
	I1205 06:32:30.737871   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.737879   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:30.737884   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:30.737945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:30.761745   54335 cri.go:89] found id: ""
	I1205 06:32:30.761759   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.761766   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:30.761771   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:30.761836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:30.784898   54335 cri.go:89] found id: ""
	I1205 06:32:30.784912   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.784919   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:30.784924   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:30.784980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:30.810894   54335 cri.go:89] found id: ""
	I1205 06:32:30.810908   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.810915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:30.810920   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:30.810976   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:30.839604   54335 cri.go:89] found id: ""
	I1205 06:32:30.839617   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.839623   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:30.839636   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:30.839647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:30.865641   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:30.865658   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:30.921606   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:30.921625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:30.932281   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:30.932297   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:30.995168   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:30.995177   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:30.995187   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.558401   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:33.568813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:33.568893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:33.596483   54335 cri.go:89] found id: ""
	I1205 06:32:33.596496   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.596503   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:33.596508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:33.596566   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:33.624025   54335 cri.go:89] found id: ""
	I1205 06:32:33.624039   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.624046   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:33.624051   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:33.624108   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:33.655953   54335 cri.go:89] found id: ""
	I1205 06:32:33.655966   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.655974   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:33.655979   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:33.656039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:33.684431   54335 cri.go:89] found id: ""
	I1205 06:32:33.684445   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.684452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:33.684458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:33.684517   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:33.710631   54335 cri.go:89] found id: ""
	I1205 06:32:33.710644   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.710651   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:33.710656   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:33.710714   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:33.735367   54335 cri.go:89] found id: ""
	I1205 06:32:33.735380   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.735387   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:33.735393   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:33.735450   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:33.759636   54335 cri.go:89] found id: ""
	I1205 06:32:33.759650   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.759657   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:33.759664   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:33.759675   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:33.814547   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:33.814565   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:33.825805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:33.825820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:33.891604   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:33.891614   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:33.891624   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.953767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:33.953787   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
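The timestamps show the same sweep repeating on a roughly three-second cadence (06:32:24, :27, :30, :33, ...), with each round opening with "sudo pgrep -xnf kube-apiserver.*minikube.*" to test whether an apiserver process exists yet. A hedged Go sketch of that wait loop follows; the two-minute deadline is an assumption for illustration, not the timeout minikube actually uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's pgrep probe: pgrep exits 0
// only when a process matching the pattern exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline for the sketch
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the spacing between log cycles
	}
	fmt.Println("timed out waiting for kube-apiserver")
}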
	I1205 06:32:36.482228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:36.492694   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:36.492753   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:36.518206   54335 cri.go:89] found id: ""
	I1205 06:32:36.518222   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.518229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:36.518233   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:36.518290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:36.543531   54335 cri.go:89] found id: ""
	I1205 06:32:36.543544   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.543551   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:36.543556   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:36.543615   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:36.567286   54335 cri.go:89] found id: ""
	I1205 06:32:36.567299   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.567306   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:36.567311   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:36.567367   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:36.592165   54335 cri.go:89] found id: ""
	I1205 06:32:36.592178   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.592185   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:36.592190   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:36.592246   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:36.621238   54335 cri.go:89] found id: ""
	I1205 06:32:36.621251   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.621258   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:36.621264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:36.621329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:36.646816   54335 cri.go:89] found id: ""
	I1205 06:32:36.646838   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.646845   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:36.646850   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:36.646917   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:36.672562   54335 cri.go:89] found id: ""
	I1205 06:32:36.672575   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.672582   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:36.672599   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:36.672609   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:36.727909   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:36.727926   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:36.738625   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:36.738641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:36.803851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:36.803861   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:36.803872   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:36.865831   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:36.865849   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:39.393852   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:39.404022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:39.404090   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:39.433108   54335 cri.go:89] found id: ""
	I1205 06:32:39.433122   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.433129   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:39.433134   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:39.433218   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:39.458840   54335 cri.go:89] found id: ""
	I1205 06:32:39.458853   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.458862   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:39.458867   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:39.458923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:39.483121   54335 cri.go:89] found id: ""
	I1205 06:32:39.483135   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.483142   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:39.483147   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:39.483203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:39.508080   54335 cri.go:89] found id: ""
	I1205 06:32:39.508092   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.508100   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:39.508107   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:39.508166   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:39.532483   54335 cri.go:89] found id: ""
	I1205 06:32:39.532496   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.532503   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:39.532508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:39.532563   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:39.556203   54335 cri.go:89] found id: ""
	I1205 06:32:39.556217   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.556224   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:39.556229   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:39.556286   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:39.579787   54335 cri.go:89] found id: ""
	I1205 06:32:39.579802   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.579809   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:39.579818   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:39.579828   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:39.644828   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:39.644847   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:39.657327   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:39.657341   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:39.724034   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:39.724044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:39.724054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:39.786205   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:39.786224   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:42.317043   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:42.327925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:42.327988   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:42.353925   54335 cri.go:89] found id: ""
	I1205 06:32:42.353939   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.353946   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:42.353952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:42.354013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:42.385300   54335 cri.go:89] found id: ""
	I1205 06:32:42.385314   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.385321   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:42.385326   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:42.385385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:42.411306   54335 cri.go:89] found id: ""
	I1205 06:32:42.411319   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.411326   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:42.411331   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:42.411389   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:42.436499   54335 cri.go:89] found id: ""
	I1205 06:32:42.436513   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.436520   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:42.436526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:42.436590   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:42.461983   54335 cri.go:89] found id: ""
	I1205 06:32:42.462000   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.462008   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:42.462013   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:42.462072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:42.490948   54335 cri.go:89] found id: ""
	I1205 06:32:42.490962   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.490971   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:42.490976   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:42.491036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:42.515766   54335 cri.go:89] found id: ""
	I1205 06:32:42.515785   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.515793   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:42.515800   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:42.515810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:42.571249   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:42.571267   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:42.582146   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:42.582161   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:42.671227   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:42.671236   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:42.671247   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:42.733761   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:42.733780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:45.261718   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:45.276631   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:45.276700   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:45.305280   54335 cri.go:89] found id: ""
	I1205 06:32:45.305296   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.305304   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:45.305309   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:45.305375   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:45.332314   54335 cri.go:89] found id: ""
	I1205 06:32:45.332407   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.332482   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:45.332488   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:45.332551   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:45.368080   54335 cri.go:89] found id: ""
	I1205 06:32:45.368141   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.368165   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:45.368171   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:45.368336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:45.400257   54335 cri.go:89] found id: ""
	I1205 06:32:45.400284   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.400292   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:45.400298   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:45.400368   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:45.425301   54335 cri.go:89] found id: ""
	I1205 06:32:45.425314   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.425321   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:45.425327   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:45.425385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:45.450756   54335 cri.go:89] found id: ""
	I1205 06:32:45.450769   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.450777   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:45.450782   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:45.450845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:45.481391   54335 cri.go:89] found id: ""
	I1205 06:32:45.481405   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.481413   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:45.481421   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:45.481441   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:45.539446   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:45.539465   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:45.550849   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:45.550865   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:45.628789   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:45.628800   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:45.628810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:45.699540   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:45.699558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
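Every "kubectl describe nodes" attempt above fails the same way: kubectl dials localhost:8441 (resolving first to [::1]), and the connection is refused because no apiserver container exists to listen there. The failure can be reproduced independently of kubectl with a plain TCP dial; the sketch below is illustrative only.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl tries [::1]:8441 first when resolving localhost,
	// hence the `dial tcp [::1]:8441` in the stderr above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}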
	I1205 06:32:48.227049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:48.237481   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:48.237550   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:48.267696   54335 cri.go:89] found id: ""
	I1205 06:32:48.267709   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.267716   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:48.267721   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:48.267789   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:48.294097   54335 cri.go:89] found id: ""
	I1205 06:32:48.294112   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.294118   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:48.294124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:48.294186   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:48.324117   54335 cri.go:89] found id: ""
	I1205 06:32:48.324131   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.324139   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:48.324144   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:48.324203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:48.349743   54335 cri.go:89] found id: ""
	I1205 06:32:48.349758   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.349765   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:48.349781   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:48.349849   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:48.379197   54335 cri.go:89] found id: ""
	I1205 06:32:48.379211   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.379219   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:48.379225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:48.379283   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:48.404472   54335 cri.go:89] found id: ""
	I1205 06:32:48.404486   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.404493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:48.404499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:48.404555   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:48.430058   54335 cri.go:89] found id: ""
	I1205 06:32:48.430072   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.430079   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:48.430086   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:48.430099   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.459503   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:48.459519   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:48.518141   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:48.518158   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:48.529014   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:48.529031   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:48.601337   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:48.601347   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:48.601357   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
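
Each diagnostic cycle above starts by listing CRI containers for every control-plane component; an empty ID list across all of them means the runtime holds no control-plane containers at all, which is consistent with the refused apiserver port in the kubectl stderr above. A minimal standalone sketch of that listing step, assuming crictl and sudo are available on the node (an illustrative helper, not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component set the log walks through on every cycle.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// `crictl ps -a --quiet` prints one container ID per line;
		// --name filters by container name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %v\n", name, ids)
		} else {
			// Corresponds to the W-level "No container was found matching" lines.
			fmt.Printf("no container was found matching %q\n", name)
		}
	}
}
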
	I1205 06:32:51.177615   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:51.187543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:51.187599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:51.218589   54335 cri.go:89] found id: ""
	I1205 06:32:51.218603   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.218610   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:51.218615   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:51.218673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:51.243490   54335 cri.go:89] found id: ""
	I1205 06:32:51.243509   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.243516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:51.243521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:51.243577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:51.268372   54335 cri.go:89] found id: ""
	I1205 06:32:51.268385   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.268393   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:51.268398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:51.268458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:51.292432   54335 cri.go:89] found id: ""
	I1205 06:32:51.292445   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.292452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:51.292457   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:51.292513   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:51.316338   54335 cri.go:89] found id: ""
	I1205 06:32:51.316351   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.316358   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:51.316364   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:51.316419   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:51.341611   54335 cri.go:89] found id: ""
	I1205 06:32:51.341625   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.341645   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:51.341650   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:51.341708   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:51.365650   54335 cri.go:89] found id: ""
	I1205 06:32:51.365664   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.365671   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:51.365679   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:51.365690   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:51.377639   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:51.377655   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:51.443518   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:51.443527   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:51.443540   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.505744   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:51.505763   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:51.532869   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:51.532884   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
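
The kubectl stderr in each failed "describe nodes" attempt is a plain TCP-level failure: nothing is listening on the apiserver port 8441. A hypothetical probe, separate from the test suite, reproduces the same symptom without going through kubectl:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port that kubectl keeps failing against above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver running this prints the same
		// "connect: connection refused" as the log's stderr.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
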
	I1205 06:32:54.096225   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:54.106698   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:54.106760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:54.134689   54335 cri.go:89] found id: ""
	I1205 06:32:54.134702   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.134709   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:54.134714   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:54.134769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:54.158113   54335 cri.go:89] found id: ""
	I1205 06:32:54.158126   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.158133   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:54.158138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:54.158199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:54.182422   54335 cri.go:89] found id: ""
	I1205 06:32:54.182436   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.182444   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:54.182448   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:54.182508   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:54.206399   54335 cri.go:89] found id: ""
	I1205 06:32:54.206412   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.206418   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:54.206423   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:54.206481   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:54.229926   54335 cri.go:89] found id: ""
	I1205 06:32:54.229940   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.229947   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:54.229952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:54.230011   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:54.254356   54335 cri.go:89] found id: ""
	I1205 06:32:54.254370   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.254377   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:54.254382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:54.254441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:54.278495   54335 cri.go:89] found id: ""
	I1205 06:32:54.278508   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.278516   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:54.278523   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:54.278533   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:54.305603   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:54.305619   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.360184   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:54.360202   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:54.371510   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:54.371525   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:54.438927   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:54.438936   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:54.438947   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
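
Every cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest matching process. A standalone sketch of that check (an assumed local helper; the real call runs over SSH via ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits with status 1 when nothing matches, which Output
	// surfaces as an *exec.ExitError.
	out, err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}
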
	I1205 06:32:57.002913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:57.020172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:57.020235   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:57.044543   54335 cri.go:89] found id: ""
	I1205 06:32:57.044556   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.044564   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:57.044570   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:57.044629   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:57.070053   54335 cri.go:89] found id: ""
	I1205 06:32:57.070067   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.070074   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:57.070079   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:57.070134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:57.094644   54335 cri.go:89] found id: ""
	I1205 06:32:57.094659   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.094666   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:57.094670   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:57.094769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:57.118698   54335 cri.go:89] found id: ""
	I1205 06:32:57.118722   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.118729   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:57.118734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:57.118799   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:57.142854   54335 cri.go:89] found id: ""
	I1205 06:32:57.142868   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.142875   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:57.142881   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:57.142946   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:57.171220   54335 cri.go:89] found id: ""
	I1205 06:32:57.171234   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.171241   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:57.171246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:57.171311   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:57.195529   54335 cri.go:89] found id: ""
	I1205 06:32:57.195544   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.195551   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:57.195558   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:57.195578   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:57.251284   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:57.251305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:57.262555   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:57.262570   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:57.333629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:57.333638   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:57.333651   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.394773   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:57.394791   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
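
Each "Gathering logs for ..." step maps to one shell pipeline, shown verbatim on the Run: lines above. A local sketch of the same collection, assuming it runs directly on the node rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pipelines copied from the Run: lines in the log.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", s.name, out)
	}
}
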
	I1205 06:32:59.923047   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:59.933128   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:59.933207   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:59.960876   54335 cri.go:89] found id: ""
	I1205 06:32:59.960890   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.960896   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:59.960901   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:59.960961   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:59.985649   54335 cri.go:89] found id: ""
	I1205 06:32:59.985664   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.985671   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:59.985676   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:59.985737   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:00.069985   54335 cri.go:89] found id: ""
	I1205 06:33:00.070002   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.070019   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:00.070026   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:00.070103   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:00.156917   54335 cri.go:89] found id: ""
	I1205 06:33:00.156936   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.156945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:00.156958   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:00.157043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:00.284647   54335 cri.go:89] found id: ""
	I1205 06:33:00.284663   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.284672   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:00.284678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:00.284758   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:00.335248   54335 cri.go:89] found id: ""
	I1205 06:33:00.335263   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.335271   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:00.335280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:00.335365   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:00.377235   54335 cri.go:89] found id: ""
	I1205 06:33:00.377251   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.377259   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:00.377267   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:00.377291   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:00.390543   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:00.390561   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:00.464312   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:00.464323   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:00.464334   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:00.528767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:00.528786   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:00.562265   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:00.562282   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.126784   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:03.137248   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:03.137309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:03.163136   54335 cri.go:89] found id: ""
	I1205 06:33:03.163149   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.163156   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:03.163161   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:03.163221   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:03.189239   54335 cri.go:89] found id: ""
	I1205 06:33:03.189253   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.189261   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:03.189277   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:03.189340   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:03.215019   54335 cri.go:89] found id: ""
	I1205 06:33:03.215032   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.215039   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:03.215045   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:03.215104   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:03.240336   54335 cri.go:89] found id: ""
	I1205 06:33:03.240350   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.240357   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:03.240362   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:03.240421   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:03.264735   54335 cri.go:89] found id: ""
	I1205 06:33:03.264749   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.264762   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:03.264767   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:03.264831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:03.289528   54335 cri.go:89] found id: ""
	I1205 06:33:03.289541   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.289548   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:03.289553   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:03.289658   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:03.315032   54335 cri.go:89] found id: ""
	I1205 06:33:03.315046   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.315053   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:03.315060   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:03.315071   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.371569   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:03.371588   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:03.382809   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:03.382825   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:03.450556   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:03.450566   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:03.450577   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:03.516929   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:03.516948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:06.046009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:06.057281   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:06.057355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:06.084601   54335 cri.go:89] found id: ""
	I1205 06:33:06.084615   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.084623   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:06.084629   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:06.084690   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:06.111286   54335 cri.go:89] found id: ""
	I1205 06:33:06.111300   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.111307   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:06.111313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:06.111374   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:06.136965   54335 cri.go:89] found id: ""
	I1205 06:33:06.136978   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.136985   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:06.136990   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:06.137048   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:06.162299   54335 cri.go:89] found id: ""
	I1205 06:33:06.162312   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.162319   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:06.162325   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:06.162387   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:06.189555   54335 cri.go:89] found id: ""
	I1205 06:33:06.189569   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.189576   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:06.189581   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:06.189645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:06.215170   54335 cri.go:89] found id: ""
	I1205 06:33:06.215184   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.215192   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:06.215198   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:06.215258   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:06.241073   54335 cri.go:89] found id: ""
	I1205 06:33:06.241087   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.241094   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:06.241112   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:06.241123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:06.296188   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:06.296205   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:06.306926   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:06.306941   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:06.371295   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:06.371304   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:06.371316   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:06.432933   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:06.432951   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:08.969294   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:08.979402   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:08.979463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:09.020683   54335 cri.go:89] found id: ""
	I1205 06:33:09.020697   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.020704   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:09.020710   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:09.020771   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:09.046109   54335 cri.go:89] found id: ""
	I1205 06:33:09.046123   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.046130   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:09.046136   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:09.046195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:09.070968   54335 cri.go:89] found id: ""
	I1205 06:33:09.070981   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.070988   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:09.070995   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:09.071056   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:09.096098   54335 cri.go:89] found id: ""
	I1205 06:33:09.096111   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.096118   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:09.096123   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:09.096226   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:09.121468   54335 cri.go:89] found id: ""
	I1205 06:33:09.121482   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.121489   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:09.121495   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:09.121573   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:09.150975   54335 cri.go:89] found id: ""
	I1205 06:33:09.150989   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.150997   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:09.151004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:09.151063   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:09.176504   54335 cri.go:89] found id: ""
	I1205 06:33:09.176517   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.176527   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:09.176534   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:09.176545   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:09.203288   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:09.203302   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:09.259402   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:09.259423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:09.270454   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:09.270470   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:09.334084   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:09.334095   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:09.334105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:11.894816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:11.904810   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:11.904871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:11.930015   54335 cri.go:89] found id: ""
	I1205 06:33:11.930029   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.930036   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:11.930042   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:11.930100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:11.954795   54335 cri.go:89] found id: ""
	I1205 06:33:11.954808   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.954815   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:11.954821   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:11.954877   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:11.978195   54335 cri.go:89] found id: ""
	I1205 06:33:11.978208   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.978231   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:11.978236   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:11.978292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:12.003210   54335 cri.go:89] found id: ""
	I1205 06:33:12.003227   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.003235   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:12.003241   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:12.003326   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:12.033020   54335 cri.go:89] found id: ""
	I1205 06:33:12.033034   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.033041   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:12.033046   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:12.033111   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:12.058060   54335 cri.go:89] found id: ""
	I1205 06:33:12.058073   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.058081   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:12.058086   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:12.058143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:12.082699   54335 cri.go:89] found id: ""
	I1205 06:33:12.082713   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.082719   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:12.082727   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:12.082737   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:12.151250   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
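	(The repeated memcache.go errors above are kubectl failing its API discovery request because nothing is listening on the apiserver port. The failure can be reproduced with a plain HTTPS GET against the same endpoint; a hedged Go sketch, with the URL taken directly from the log and TLS verification skipped only because this is a reachability check:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Reachability check only: skip verification of the apiserver's
    		// self-signed certificate chain.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
    	if err != nil {
    		// While the control plane is down this prints the same
    		// "connect: connection refused" seen in the kubectl stderr above.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver responded:", resp.Status)
    }

	A running apiserver would answer even an unauthenticated request with an HTTP status rather than a dial error, so the dial failure alone shows the process is down rather than misconfigured.)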
	I1205 06:33:12.151259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:12.151271   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:12.218438   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:12.218461   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:12.248241   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:12.248260   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:12.307820   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:12.307838   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
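	(Each "Gathering logs for ..." step above is a shell one-liner executed on the node over SSH; a compact sketch that replays the same commands locally. The command strings are copied from the log, while the gather helper itself is hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command from the log and reports how much
    // output it produced; minikube streams the output into its own log.
    func gather(label, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("== %s: err=%v, %d bytes ==\n", label, err, len(out))
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }

	)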
	I1205 06:33:14.820623   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:14.830697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:14.830756   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:14.863478   54335 cri.go:89] found id: ""
	I1205 06:33:14.863492   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.863499   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:14.863504   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:14.863565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:14.895084   54335 cri.go:89] found id: ""
	I1205 06:33:14.895098   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.895106   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:14.895111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:14.895172   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:14.925468   54335 cri.go:89] found id: ""
	I1205 06:33:14.925482   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.925489   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:14.925494   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:14.925614   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:14.954925   54335 cri.go:89] found id: ""
	I1205 06:33:14.954938   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.954945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:14.954950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:14.955009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:14.980066   54335 cri.go:89] found id: ""
	I1205 06:33:14.980080   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.980088   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:14.980093   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:14.980152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:15.028743   54335 cri.go:89] found id: ""
	I1205 06:33:15.028763   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.028770   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:15.028777   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:15.028845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:15.057623   54335 cri.go:89] found id: ""
	I1205 06:33:15.057636   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.057643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:15.057650   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:15.057661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:15.114789   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:15.114808   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:15.126224   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:15.126240   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:15.193033   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:15.193044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:15.193054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:15.256748   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:15.256767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
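	(From here the same cycle repeats roughly every three seconds: pgrep for a kube-apiserver process, re-probe the CRI for each component, re-gather logs. A hypothetical reduction of that wait loop; the pgrep pattern is taken verbatim from the log, while the overall timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // check above; pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumed; the real timeout is set by the caller
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the spacing of the cycles in this log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

	In this run the process never appears, so the loop below keeps cycling until the test's own timeout fires.)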
	I1205 06:33:17.786454   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:17.796729   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:17.796787   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:17.825815   54335 cri.go:89] found id: ""
	I1205 06:33:17.825828   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.825835   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:17.825840   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:17.825900   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:17.855661   54335 cri.go:89] found id: ""
	I1205 06:33:17.855675   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.855682   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:17.855687   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:17.855744   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:17.883175   54335 cri.go:89] found id: ""
	I1205 06:33:17.883188   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.883195   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:17.883200   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:17.883260   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:17.911578   54335 cri.go:89] found id: ""
	I1205 06:33:17.911592   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.911599   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:17.911604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:17.911662   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:17.939731   54335 cri.go:89] found id: ""
	I1205 06:33:17.939750   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.939758   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:17.939763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:17.939818   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:17.968310   54335 cri.go:89] found id: ""
	I1205 06:33:17.968323   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.968330   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:17.968335   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:17.968392   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:17.992739   54335 cri.go:89] found id: ""
	I1205 06:33:17.992752   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.992759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:17.992765   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:17.992776   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:18.006966   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:18.006985   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:18.077932   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:18.077943   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:18.077954   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:18.141190   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:18.141206   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:18.172978   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:18.172995   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:20.730714   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:20.741267   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:20.741329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:20.765738   54335 cri.go:89] found id: ""
	I1205 06:33:20.765751   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.765758   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:20.765763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:20.765821   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:20.790360   54335 cri.go:89] found id: ""
	I1205 06:33:20.790373   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.790380   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:20.790385   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:20.790446   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:20.815276   54335 cri.go:89] found id: ""
	I1205 06:33:20.815290   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.815297   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:20.815302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:20.815361   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:20.840257   54335 cri.go:89] found id: ""
	I1205 06:33:20.840270   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.840277   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:20.840283   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:20.840345   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:20.869989   54335 cri.go:89] found id: ""
	I1205 06:33:20.870003   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.870010   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:20.870015   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:20.870077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:20.908890   54335 cri.go:89] found id: ""
	I1205 06:33:20.908903   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.908915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:20.908921   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:20.908978   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:20.935421   54335 cri.go:89] found id: ""
	I1205 06:33:20.935435   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.935442   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:20.935450   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:20.935460   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:20.946582   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:20.946597   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:21.010138   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:21.010149   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:21.010172   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:21.077392   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:21.077409   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:21.105240   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:21.105255   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.662909   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:23.672961   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:23.673022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:23.697989   54335 cri.go:89] found id: ""
	I1205 06:33:23.698003   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.698010   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:23.698016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:23.698078   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:23.723698   54335 cri.go:89] found id: ""
	I1205 06:33:23.723712   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.723718   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:23.723723   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:23.723781   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:23.747403   54335 cri.go:89] found id: ""
	I1205 06:33:23.747416   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.747423   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:23.747428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:23.747486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:23.775201   54335 cri.go:89] found id: ""
	I1205 06:33:23.775214   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.775221   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:23.775227   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:23.775290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:23.799494   54335 cri.go:89] found id: ""
	I1205 06:33:23.799507   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.799514   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:23.799519   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:23.799575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:23.824229   54335 cri.go:89] found id: ""
	I1205 06:33:23.824242   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.824249   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:23.824254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:23.824310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:23.851738   54335 cri.go:89] found id: ""
	I1205 06:33:23.851752   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.851759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:23.851767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:23.851777   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:23.897695   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:23.897710   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.961464   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:23.961482   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:23.972542   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:23.972558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:24.046391   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:24.046402   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:24.046414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.611978   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:26.621743   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:26.621802   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:26.645855   54335 cri.go:89] found id: ""
	I1205 06:33:26.645868   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.645875   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:26.645879   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:26.645934   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:26.675349   54335 cri.go:89] found id: ""
	I1205 06:33:26.675363   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.675369   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:26.675374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:26.675430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:26.698540   54335 cri.go:89] found id: ""
	I1205 06:33:26.698554   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.698561   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:26.698566   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:26.698630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:26.721264   54335 cri.go:89] found id: ""
	I1205 06:33:26.721277   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.721283   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:26.721288   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:26.721343   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:26.744526   54335 cri.go:89] found id: ""
	I1205 06:33:26.744539   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.744546   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:26.744551   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:26.744607   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:26.767695   54335 cri.go:89] found id: ""
	I1205 06:33:26.767719   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.767727   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:26.767732   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:26.767792   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:26.791289   54335 cri.go:89] found id: ""
	I1205 06:33:26.791329   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.791336   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:26.791344   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:26.791354   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:26.856152   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:26.856162   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:26.856173   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.930967   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:26.930987   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:26.958183   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:26.958200   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:27.015910   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:27.015927   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.527097   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:29.537027   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:29.537087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:29.561570   54335 cri.go:89] found id: ""
	I1205 06:33:29.561583   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.561591   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:29.561598   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:29.561655   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:29.586431   54335 cri.go:89] found id: ""
	I1205 06:33:29.586445   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.586452   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:29.586474   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:29.586543   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:29.615124   54335 cri.go:89] found id: ""
	I1205 06:33:29.615139   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.615145   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:29.615151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:29.615208   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:29.640801   54335 cri.go:89] found id: ""
	I1205 06:33:29.640814   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.640831   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:29.640837   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:29.640893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:29.665711   54335 cri.go:89] found id: ""
	I1205 06:33:29.665725   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.665731   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:29.665737   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:29.665797   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:29.690393   54335 cri.go:89] found id: ""
	I1205 06:33:29.690416   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.690423   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:29.690428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:29.690500   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:29.714522   54335 cri.go:89] found id: ""
	I1205 06:33:29.714535   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.714542   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:29.714550   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:29.714562   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:29.770787   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:29.770804   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.781149   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:29.781179   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:29.848588   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:29.848601   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:29.848612   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:29.927646   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:29.927665   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:32.455807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:32.466055   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:32.466118   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:32.490796   54335 cri.go:89] found id: ""
	I1205 06:33:32.490809   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.490816   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:32.490822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:32.490881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:32.515490   54335 cri.go:89] found id: ""
	I1205 06:33:32.515503   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.515511   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:32.515516   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:32.515577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:32.543147   54335 cri.go:89] found id: ""
	I1205 06:33:32.543161   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.543167   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:32.543172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:32.543234   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:32.567288   54335 cri.go:89] found id: ""
	I1205 06:33:32.567301   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.567308   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:32.567313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:32.567370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:32.594765   54335 cri.go:89] found id: ""
	I1205 06:33:32.594778   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.594785   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:32.594790   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:32.594846   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:32.628174   54335 cri.go:89] found id: ""
	I1205 06:33:32.628187   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.628208   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:32.628223   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:32.628310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:32.653204   54335 cri.go:89] found id: ""
	I1205 06:33:32.653218   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.653225   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:32.653232   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:32.653242   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:32.713436   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:32.713452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:32.723879   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:32.723894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:32.788746   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:32.788757   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:32.788767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:32.850792   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:32.850809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:35.388187   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:35.398195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:35.398254   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:35.421975   54335 cri.go:89] found id: ""
	I1205 06:33:35.421989   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.421996   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:35.422002   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:35.422065   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:35.445920   54335 cri.go:89] found id: ""
	I1205 06:33:35.445934   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.445942   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:35.445947   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:35.446009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:35.471144   54335 cri.go:89] found id: ""
	I1205 06:33:35.471157   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.471164   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:35.471169   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:35.471231   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:35.495788   54335 cri.go:89] found id: ""
	I1205 06:33:35.495802   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.495808   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:35.495814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:35.495871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:35.524598   54335 cri.go:89] found id: ""
	I1205 06:33:35.524621   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.524628   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:35.524633   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:35.524701   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:35.549143   54335 cri.go:89] found id: ""
	I1205 06:33:35.549227   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.549235   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:35.549242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:35.549301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:35.574312   54335 cri.go:89] found id: ""
	I1205 06:33:35.574325   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.574332   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:35.574340   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:35.574352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:35.628890   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:35.628908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:35.639919   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:35.639934   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:35.703264   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:35.695689   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.696286   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.697814   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.698255   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.699741   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:35.703273   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:35.703286   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:35.766049   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:35.766067   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
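
The probe sequence above (crictl ps -a --quiet --name=<component>, empty output, "No container was found matching ...") is the generic pattern for checking whether a named control-plane container exists on the node. A minimal standalone sketch of that pattern, assuming only that crictl is on PATH and runnable via sudo — an illustration, not minikube's actual source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs asks crictl for the IDs of all containers (running or
// stopped) whose name matches the given component; an empty result means
// no such container exists on the node.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("probe %q failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("No container was found matching %q\n", c)
		default:
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}
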
	I1205 06:33:38.297790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:38.307702   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:38.307762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:38.336326   54335 cri.go:89] found id: ""
	I1205 06:33:38.336340   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.336348   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:38.336353   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:38.336410   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:38.361342   54335 cri.go:89] found id: ""
	I1205 06:33:38.361356   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.361363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:38.361371   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:38.361429   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:38.385186   54335 cri.go:89] found id: ""
	I1205 06:33:38.385200   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.385208   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:38.385213   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:38.385281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:38.413803   54335 cri.go:89] found id: ""
	I1205 06:33:38.413816   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.413824   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:38.413829   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:38.413889   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:38.437536   54335 cri.go:89] found id: ""
	I1205 06:33:38.437572   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.437579   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:38.437585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:38.437645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:38.462979   54335 cri.go:89] found id: ""
	I1205 06:33:38.462993   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.463000   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:38.463006   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:38.463069   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:38.488151   54335 cri.go:89] found id: ""
	I1205 06:33:38.488163   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.488170   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:38.488186   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:38.488196   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:38.544680   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:38.544696   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:38.555626   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:38.555641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:38.618692   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:38.610579   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.611054   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.612674   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.613205   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.614695   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:38.618701   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:38.618712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:38.682609   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:38.682629   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
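
Every "describe nodes" attempt in this run dies the same way: kubectl cannot even fetch the API group list, because nothing is listening on localhost:8441, the --apiserver-port this profile was started with. A quick way to separate "apiserver down" from "kubectl misconfigured" is to test the TCP port before invoking kubectl; the sketch below reuses the binary and kubeconfig paths from the log and is an illustration only:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

func main() {
	// TCP-level check first: "connection refused" here already explains
	// the kubectl failure below.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
	} else {
		conn.Close()
	}

	// Same invocation as the log's "describe nodes" step.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	).CombinedOutput()
	if err != nil {
		// Matches the log: "Process exited with status 1".
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
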
	I1205 06:33:41.211631   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:41.221454   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:41.221514   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:41.245435   54335 cri.go:89] found id: ""
	I1205 06:33:41.245448   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.245455   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:41.245460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:41.245516   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:41.268900   54335 cri.go:89] found id: ""
	I1205 06:33:41.268913   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.268920   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:41.268925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:41.268980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:41.297438   54335 cri.go:89] found id: ""
	I1205 06:33:41.297452   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.297460   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:41.297471   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:41.297536   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:41.325936   54335 cri.go:89] found id: ""
	I1205 06:33:41.325949   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.325956   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:41.325962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:41.326036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:41.354117   54335 cri.go:89] found id: ""
	I1205 06:33:41.354131   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.354138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:41.354152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:41.354209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:41.378638   54335 cri.go:89] found id: ""
	I1205 06:33:41.378651   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.378658   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:41.378664   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:41.378720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:41.407136   54335 cri.go:89] found id: ""
	I1205 06:33:41.407150   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.407157   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:41.407164   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:41.407176   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:41.466362   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:41.466385   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:41.477977   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:41.477993   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:41.544052   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:41.534487   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.535316   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537008   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537377   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.540464   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:41.544062   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:41.544073   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:41.606455   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:41.606472   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
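
The cycle restarts every ~3 seconds from sudo pgrep -xnf kube-apiserver.*minikube.*, i.e. a poll loop waiting for an apiserver process to appear. pgrep exits non-zero when nothing matches, which is what keeps the loop going here. A minimal sketch of such a wait loop — the 3s interval and the pattern are taken from the log; the deadline is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process for this
// profile shows up or the deadline passes. Flags: -f matches against the
// full command line, -x requires the whole line to match the pattern,
// -n picks the newest match.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // a matching process exists
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between probes
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
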
	I1205 06:33:44.134370   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:44.145440   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:44.145497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:44.171961   54335 cri.go:89] found id: ""
	I1205 06:33:44.171975   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.171982   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:44.171987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:44.172046   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:44.197113   54335 cri.go:89] found id: ""
	I1205 06:33:44.197127   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.197134   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:44.197138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:44.197210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:44.222364   54335 cri.go:89] found id: ""
	I1205 06:33:44.222378   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.222385   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:44.222390   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:44.222449   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:44.252062   54335 cri.go:89] found id: ""
	I1205 06:33:44.252075   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.252082   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:44.252087   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:44.252143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:44.277356   54335 cri.go:89] found id: ""
	I1205 06:33:44.277370   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.277377   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:44.277382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:44.277440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:44.302126   54335 cri.go:89] found id: ""
	I1205 06:33:44.302139   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.302146   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:44.302151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:44.302214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:44.326368   54335 cri.go:89] found id: ""
	I1205 06:33:44.326382   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.326389   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:44.326396   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:44.326406   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:44.382509   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:44.382526   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:44.393060   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:44.393075   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:44.454175   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:44.446190   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.447001   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.448578   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.449122   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.450707   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:44.454185   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:44.454195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:44.516835   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:44.516854   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
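
The "container status" step is a shell fallback chain rather than a single command: prefer the crictl found on PATH, fall back to the bare name, and if the whole crictl invocation fails, try docker ps -a instead. Run through bash -c, the backticks are ordinary command substitution. A standalone reproduction, illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks are bash command substitution: use the crictl on PATH,
	// else the bare name; if that whole command fails, fall back to docker.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("container status failed: %v\n", err)
	}
	fmt.Printf("%s", out)
}
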
	I1205 06:33:47.045086   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:47.055463   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:47.055525   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:47.080371   54335 cri.go:89] found id: ""
	I1205 06:33:47.080384   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.080391   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:47.080396   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:47.080458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:47.119514   54335 cri.go:89] found id: ""
	I1205 06:33:47.119527   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.119535   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:47.119539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:47.119594   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:47.147444   54335 cri.go:89] found id: ""
	I1205 06:33:47.147457   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.147464   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:47.147469   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:47.147523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:47.177712   54335 cri.go:89] found id: ""
	I1205 06:33:47.177726   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.177733   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:47.177738   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:47.177800   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:47.202097   54335 cri.go:89] found id: ""
	I1205 06:33:47.202110   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.202118   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:47.202124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:47.202179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:47.226333   54335 cri.go:89] found id: ""
	I1205 06:33:47.226347   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.226354   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:47.226359   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:47.226431   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:47.251986   54335 cri.go:89] found id: ""
	I1205 06:33:47.251999   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.252007   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:47.252014   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:47.252025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:47.308015   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:47.308032   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:47.318805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:47.318820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:47.387458   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:47.379184   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.379724   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.381602   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.382334   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.383761   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:47.387468   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:47.387478   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:47.448913   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:47.448930   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
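
Note that all three log-gathering steps bound their output: journalctl is capped with -n 400 and dmesg is piped through tail -n 400, so a crash-looping kubelet cannot flood the report. A minimal equivalent gatherer — the commands are copied from the log; the output framing is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one bounded log-collection command and frames its output.
func gather(name, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n", name, err)
		return
	}
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
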
	I1205 06:33:49.981882   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:49.991852   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:49.991908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:50.023200   54335 cri.go:89] found id: ""
	I1205 06:33:50.023221   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.023229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:50.023235   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:50.023306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:50.049577   54335 cri.go:89] found id: ""
	I1205 06:33:50.049591   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.049598   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:50.049604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:50.049665   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:50.078681   54335 cri.go:89] found id: ""
	I1205 06:33:50.078695   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.078703   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:50.078708   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:50.078769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:50.115465   54335 cri.go:89] found id: ""
	I1205 06:33:50.115478   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.115485   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:50.115496   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:50.115554   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:50.146578   54335 cri.go:89] found id: ""
	I1205 06:33:50.146591   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.146598   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:50.146603   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:50.146661   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:50.175515   54335 cri.go:89] found id: ""
	I1205 06:33:50.175528   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.175535   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:50.175541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:50.175598   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:50.204420   54335 cri.go:89] found id: ""
	I1205 06:33:50.204433   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.204440   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:50.204449   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:50.204458   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:50.258843   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:50.258860   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:50.269324   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:50.269339   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:50.336484   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:50.328749   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.329537   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331160   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331458   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.332932   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:50.336493   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:50.336515   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:50.399746   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:50.399764   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:52.927181   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:52.937445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:52.937504   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:52.960934   54335 cri.go:89] found id: ""
	I1205 06:33:52.960947   54335 logs.go:282] 0 containers: []
	W1205 06:33:52.960954   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:52.960960   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:52.961022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:52.986242   54335 cri.go:89] found id: ""
	I1205 06:33:52.986255   54335 logs.go:282] 0 containers: []
	W1205 06:33:52.986263   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:52.986268   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:52.986327   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:53.013571   54335 cri.go:89] found id: ""
	I1205 06:33:53.013585   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.013592   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:53.013597   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:53.013660   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:53.039257   54335 cri.go:89] found id: ""
	I1205 06:33:53.039271   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.039278   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:53.039284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:53.039341   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:53.064162   54335 cri.go:89] found id: ""
	I1205 06:33:53.064174   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.064197   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:53.064202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:53.064259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:53.090118   54335 cri.go:89] found id: ""
	I1205 06:33:53.090131   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.090138   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:53.090143   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:53.090211   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:53.129452   54335 cri.go:89] found id: ""
	I1205 06:33:53.129464   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.129471   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:53.129478   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:53.129489   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:53.192396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:53.192413   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:53.203770   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:53.203784   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:53.268406   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:53.260521   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.261282   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.262897   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.263184   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.264650   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:53.268415   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:53.268427   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:53.331135   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:53.331156   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:55.857914   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:55.868426   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:55.868484   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:55.893813   54335 cri.go:89] found id: ""
	I1205 06:33:55.893826   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.893833   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:55.893838   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:55.893898   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:55.917807   54335 cri.go:89] found id: ""
	I1205 06:33:55.917820   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.917827   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:55.917832   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:55.917890   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:55.942437   54335 cri.go:89] found id: ""
	I1205 06:33:55.942450   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.942457   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:55.942462   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:55.942520   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:55.967048   54335 cri.go:89] found id: ""
	I1205 06:33:55.967061   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.967069   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:55.967075   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:55.967134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:55.995796   54335 cri.go:89] found id: ""
	I1205 06:33:55.995809   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.995817   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:55.995822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:55.995888   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:56.024165   54335 cri.go:89] found id: ""
	I1205 06:33:56.024179   54335 logs.go:282] 0 containers: []
	W1205 06:33:56.024186   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:56.024192   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:56.024255   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:56.050928   54335 cri.go:89] found id: ""
	I1205 06:33:56.050942   54335 logs.go:282] 0 containers: []
	W1205 06:33:56.050949   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:56.050957   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:56.050966   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:56.108175   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:56.108193   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:56.120521   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:56.120536   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:56.188922   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:56.181151   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.181776   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.183592   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.184091   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.185670   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:56.188933   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:56.188944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:56.250795   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:56.250813   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:58.783821   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:58.794017   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:58.794077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:58.818887   54335 cri.go:89] found id: ""
	I1205 06:33:58.818900   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.818907   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:58.818913   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:58.818970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:58.843085   54335 cri.go:89] found id: ""
	I1205 06:33:58.843098   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.843105   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:58.843111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:58.843173   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:58.873003   54335 cri.go:89] found id: ""
	I1205 06:33:58.873016   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.873024   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:58.873029   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:58.873087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:58.898773   54335 cri.go:89] found id: ""
	I1205 06:33:58.898786   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.898793   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:58.898799   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:58.898857   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:58.923518   54335 cri.go:89] found id: ""
	I1205 06:33:58.923531   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.923538   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:58.923543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:58.923601   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:58.947602   54335 cri.go:89] found id: ""
	I1205 06:33:58.947615   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.947622   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:58.947627   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:58.947685   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:58.972459   54335 cri.go:89] found id: ""
	I1205 06:33:58.972473   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.972480   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:58.972488   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:58.972499   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:58.983301   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:58.983318   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:59.058445   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:59.050735   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.051328   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053133   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053776   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.054887   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:59.058455   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:59.058468   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:59.121838   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:59.121859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:59.153321   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:59.153345   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:01.714396   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:01.724655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:34:01.724715   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:34:01.749246   54335 cri.go:89] found id: ""
	I1205 06:34:01.749259   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.749267   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:34:01.749272   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:34:01.749332   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:34:01.774227   54335 cri.go:89] found id: ""
	I1205 06:34:01.774240   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.774247   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:34:01.774253   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:34:01.774309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:34:01.799574   54335 cri.go:89] found id: ""
	I1205 06:34:01.799588   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.799595   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:34:01.799600   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:34:01.799659   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:34:01.824994   54335 cri.go:89] found id: ""
	I1205 06:34:01.825008   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.825015   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:34:01.825020   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:34:01.825084   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:34:01.854353   54335 cri.go:89] found id: ""
	I1205 06:34:01.854367   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.854374   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:34:01.854380   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:34:01.854440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:34:01.880365   54335 cri.go:89] found id: ""
	I1205 06:34:01.880379   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.880386   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:34:01.880392   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:34:01.880458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:34:01.906944   54335 cri.go:89] found id: ""
	I1205 06:34:01.906957   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.906964   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:34:01.906972   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:34:01.906982   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:34:01.938155   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:34:01.938171   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:01.992877   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:34:01.992895   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:34:02.007261   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:34:02.007278   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:34:02.080660   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:34:02.080669   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:34:02.080680   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:34:04.651581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:04.661868   54335 kubeadm.go:602] duration metric: took 4m3.72973724s to restartPrimaryControlPlane
	W1205 06:34:04.661926   54335 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:34:04.661999   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:34:05.076526   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:34:05.090468   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:34:05.098831   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:34:05.098888   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:34:05.107168   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:34:05.107177   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:34:05.107230   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:34:05.115256   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:34:05.115315   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:34:05.123163   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:34:05.130789   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:34:05.130850   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:34:05.138646   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.147024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:34:05.147082   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.155378   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:34:05.163928   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:34:05.163985   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:34:05.171609   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:34:05.211033   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:34:05.211109   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:34:05.279588   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:34:05.279653   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:34:05.279688   54335 kubeadm.go:319] OS: Linux
	I1205 06:34:05.279731   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:34:05.279778   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:34:05.279824   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:34:05.279876   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:34:05.279924   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:34:05.279971   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:34:05.280015   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:34:05.280062   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:34:05.280106   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:34:05.346565   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:34:05.346667   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:34:05.346756   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:34:05.352620   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:34:05.358148   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:34:05.358236   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:34:05.358307   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:34:05.358383   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:34:05.358442   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:34:05.358512   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:34:05.358564   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:34:05.358626   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:34:05.358685   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:34:05.358759   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:34:05.358831   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:34:05.358869   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:34:05.358923   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:34:05.469895   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:34:05.573671   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:34:05.924291   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:34:06.081184   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:34:06.337744   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:34:06.338499   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:34:06.342999   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:34:06.346294   54335 out.go:252]   - Booting up control plane ...
	I1205 06:34:06.346403   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:34:06.346486   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:34:06.347115   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:34:06.367588   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:34:06.367869   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:34:06.375582   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:34:06.375840   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:34:06.375882   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:34:06.509639   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:34:06.509751   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:38:06.507887   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288295s
	I1205 06:38:06.507910   54335 kubeadm.go:319] 
	I1205 06:38:06.508003   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:38:06.508055   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:38:06.508166   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:38:06.508171   54335 kubeadm.go:319] 
	I1205 06:38:06.508290   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:38:06.508326   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:38:06.508363   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:38:06.508367   54335 kubeadm.go:319] 
	I1205 06:38:06.511849   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:38:06.512286   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:38:06.512417   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:38:06.512667   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:38:06.512672   54335 kubeadm.go:319] 
	I1205 06:38:06.512746   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 06:38:06.512894   54335 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288295s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 06:38:06.512983   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:38:06.919674   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:38:06.932797   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:38:06.932850   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:38:06.940628   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:38:06.940637   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:38:06.940686   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:38:06.948311   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:38:06.948364   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:38:06.955656   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:38:06.963182   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:38:06.963234   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:38:06.970398   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.978024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:38:06.978085   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.985044   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:38:06.992736   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:38:06.992788   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:38:07.000057   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:38:07.042188   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:38:07.042482   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:38:07.116661   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:38:07.116719   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:38:07.116751   54335 kubeadm.go:319] OS: Linux
	I1205 06:38:07.116792   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:38:07.116836   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:38:07.116880   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:38:07.116923   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:38:07.116973   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:38:07.117018   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:38:07.117060   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:38:07.117104   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:38:07.117146   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:38:07.192664   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:38:07.192776   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:38:07.192871   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:38:07.201632   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:38:07.206982   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:38:07.207075   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:38:07.207145   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:38:07.207234   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:38:07.207300   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:38:07.207374   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:38:07.207431   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:38:07.207500   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:38:07.207566   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:38:07.207644   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:38:07.207721   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:38:07.207758   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:38:07.207819   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:38:07.441757   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:38:07.738285   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:38:07.865941   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:38:08.382979   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:38:08.523706   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:38:08.524241   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:38:08.526890   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:38:08.530137   54335 out.go:252]   - Booting up control plane ...
	I1205 06:38:08.530240   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:38:08.530313   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:38:08.530379   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:38:08.552364   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:38:08.552467   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:38:08.559742   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:38:08.560021   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:38:08.560062   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:38:08.679099   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:38:08.679206   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:42:08.679850   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117292s
	I1205 06:42:08.679871   54335 kubeadm.go:319] 
	I1205 06:42:08.679925   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:42:08.679955   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:42:08.680053   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:42:08.680057   54335 kubeadm.go:319] 
	I1205 06:42:08.680155   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:42:08.680184   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:42:08.680212   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:42:08.680215   54335 kubeadm.go:319] 
	I1205 06:42:08.683507   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:42:08.683930   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:42:08.684037   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:42:08.684273   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:42:08.684278   54335 kubeadm.go:319] 
	I1205 06:42:08.684346   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:42:08.684393   54335 kubeadm.go:403] duration metric: took 12m7.791636767s to StartCluster
	I1205 06:42:08.684424   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:42:08.684483   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:42:08.708784   54335 cri.go:89] found id: ""
	I1205 06:42:08.708797   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.708804   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:42:08.708809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:42:08.708865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:42:08.733583   54335 cri.go:89] found id: ""
	I1205 06:42:08.733596   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.733603   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:42:08.733608   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:42:08.733670   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:42:08.762239   54335 cri.go:89] found id: ""
	I1205 06:42:08.762252   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.762259   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:42:08.762264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:42:08.762320   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:42:08.785696   54335 cri.go:89] found id: ""
	I1205 06:42:08.785708   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.785715   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:42:08.785734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:42:08.785790   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:42:08.810075   54335 cri.go:89] found id: ""
	I1205 06:42:08.810088   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.810096   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:42:08.810100   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:42:08.810158   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:42:08.834276   54335 cri.go:89] found id: ""
	I1205 06:42:08.834289   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.834296   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:42:08.834302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:42:08.834358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:42:08.858346   54335 cri.go:89] found id: ""
	I1205 06:42:08.858359   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.858366   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:42:08.858374   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:42:08.858383   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:42:08.913473   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:42:08.913490   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:42:08.924092   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:42:08.924108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:42:08.996046   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:42:08.996056   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:42:08.996066   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:42:09.060557   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:42:09.060575   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 06:42:09.093287   54335 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:42:09.093337   54335 out.go:285] * 
	W1205 06:42:09.093398   54335 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.093427   54335 out.go:285] * 
	W1205 06:42:09.096107   54335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:42:09.099524   54335 out.go:203] 
	W1205 06:42:09.101056   54335 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.101108   54335 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:42:09.101134   54335 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:42:09.103029   54335 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:10.311214   21568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:10.311734   21568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:10.313470   21568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:10.314030   21568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:10.315711   21568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:42:10 up  1:24,  0 user,  load average: 0.30, 0.29, 0.40
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:42:07 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:07 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 05 06:42:07 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:07 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:07 functional-101526 kubelet[21372]: E1205 06:42:07.890263   21372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:07 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:07 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:08 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 06:42:08 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:08 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:08 functional-101526 kubelet[21378]: E1205 06:42:08.634453   21378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:08 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:08 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:09 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 06:42:09 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:09 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:09 functional-101526 kubelet[21471]: E1205 06:42:09.432767   21471 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:09 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:09 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 06:42:10 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 kubelet[21521]: E1205 06:42:10.165309   21521 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (363.210349ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.14s)
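[Editor's note] The kubelet journal above pins down the real failure: kubelet v1.35.0-beta.0 refuses to start because "kubelet is configured to not run on a host using cgroup v1". The kubeadm invocation already skips the SystemVerification preflight check, so the missing piece is the opt-in named in the preflight warning: the kubelet configuration option 'FailCgroupV1' set to 'false'. (The minikube suggestion about kubelet.cgroup-driver=systemd targets the cgroup driver, not this validation.) A minimal sketch of the opt-in, assuming the v1beta1 KubeletConfiguration spells the field failCgroupV1 and that the node shell is reachable via minikube ssh; none of this was executed as part of this report:

	# Which cgroup version is the host on? "cgroup2fs" means v2, "tmpfs" means v1.
	minikube ssh -p functional-101526 -- stat -fc %T /sys/fs/cgroup/

	# Hypothetical opt-in to cgroup v1 for kubelet v1.35+, appended to the config
	# kubeadm wrote at /var/lib/kubelet/config.yaml (see [kubelet-start] above):
	minikube ssh -p functional-101526 -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"

The durable fix is running the host on cgroup v2; see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1.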

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-101526 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-101526 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.124819ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-101526 get po -l tier=control-plane -n kube-system -o=json": exit status 1
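[Editor's note] The connection-refused errors above are consistent with the "Stopped" apiserver reported in the post-mortem below: kubectl is pointing at the expected endpoint (192.168.49.2:8441, the port requested with --apiserver-port=8441), but nothing is listening because kubelet never started the control-plane static pods. A quick triage sketch (standard commands, not part of this run):

	kubectl --context functional-101526 config view --minify   # confirm the server URL the context resolves to
	out/minikube-linux-arm64 -p functional-101526 status       # Host/Kubelet/APIServer states, as captured below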
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
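[Editor's note] In the NetworkSettings.Ports block of the inspect output above, every container port is published only on 127.0.0.1 with an ephemeral host port; the apiserver's 8441/tcp maps to host port 32791. The same mapping can be read back without parsing the JSON (expected output taken from the inspect data above, not re-run):

	docker port functional-101526 8441/tcp
	# 127.0.0.1:32791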
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (294.196369ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-226068 image ls --format yaml --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format json --alsologtostderr                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls --format table --alsologtostderr                                                                                             │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ ssh     │ functional-226068 ssh pgrep buildkitd                                                                                                                   │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ image   │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr                                                  │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ image   │ functional-226068 image ls                                                                                                                              │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ delete  │ -p functional-226068                                                                                                                                    │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ start   │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ start   │ -p functional-101526 --alsologtostderr -v=8                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:23 UTC │                     │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add registry.k8s.io/pause:latest                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache add minikube-local-cache-test:functional-101526                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ functional-101526 cache delete minikube-local-cache-test:functional-101526                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl images                                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ cache   │ functional-101526 cache reload                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ kubectl │ functional-101526 kubectl -- --context functional-101526 get pods                                                                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ start   │ -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:29:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
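[Editor's note] As a worked example of the format string above, the first entry below, "I1205 06:29:56.087419   54335 out.go:360] Setting OutFile to fd 1 ...", decodes as: "I" is the severity ([IWEF] = Info/Warning/Error/Fatal), "1205" is mmdd (December 5), "06:29:56.087419" is hh:mm:ss.uuuuuu, "54335" is the threadid (here matching the minikube process id seen throughout this log), and "out.go:360" is the file:line that emitted the message.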
	I1205 06:29:56.087419   54335 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:29:56.087558   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087562   54335 out.go:374] Setting ErrFile to fd 2...
	I1205 06:29:56.087566   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087860   54335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:29:56.088207   54335 out.go:368] Setting JSON to false
	I1205 06:29:56.088971   54335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4343,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:29:56.089024   54335 start.go:143] virtualization:  
	I1205 06:29:56.093248   54335 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:29:56.096933   54335 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:29:56.097023   54335 notify.go:221] Checking for updates...
	I1205 06:29:56.100720   54335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:29:56.103681   54335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:29:56.106734   54335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:29:56.110260   54335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:29:56.113288   54335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:29:56.116882   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:56.116976   54335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:29:56.159923   54335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:29:56.160029   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.216532   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.206341969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.216625   54335 docker.go:319] overlay module found
	I1205 06:29:56.221471   54335 out.go:179] * Using the docker driver based on existing profile
	I1205 06:29:56.224343   54335 start.go:309] selected driver: docker
	I1205 06:29:56.224353   54335 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.224443   54335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:29:56.224557   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.277319   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.268438767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.277800   54335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:29:56.277821   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:56.277884   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:56.278047   54335 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.282961   54335 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:29:56.285729   54335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:29:56.288624   54335 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:29:56.291591   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:56.291657   54335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:29:56.310650   54335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:29:56.310660   54335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:29:56.348534   54335 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:29:56.550462   54335 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
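[Editor's note] Both 404s above are expected for a beta Kubernetes version: no preloaded image tarball is published, so minikube falls back to its per-image cache, which is exactly what the cache.go lines that follow show (every required image already exists under .minikube/cache/images/arm64). The Audit table earlier in this log shows the same mechanism driven by hand, e.g.:

	out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:3.1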
	I1205 06:29:56.550637   54335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:29:56.550701   54335 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550781   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:29:56.550790   54335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.262µs
	I1205 06:29:56.550802   54335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:29:56.550812   54335 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550840   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:29:56.550844   54335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.707µs
	I1205 06:29:56.550849   54335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550857   54335 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550888   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:29:56.550892   54335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.93µs
	I1205 06:29:56.550897   54335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550906   54335 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550932   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:29:56.550937   54335 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:29:56.550939   54335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.076µs
	I1205 06:29:56.550944   54335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550952   54335 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550977   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:29:56.550965   54335 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550981   54335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.187µs
	I1205 06:29:56.550986   54335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550993   54335 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551016   54335 start.go:364] duration metric: took 28.546µs to acquireMachinesLock for "functional-101526"
	I1205 06:29:56.551022   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:29:56.551025   54335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.24µs
	I1205 06:29:56.551035   54335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:29:56.551034   54335 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:29:56.551039   54335 fix.go:54] fixHost starting: 
	I1205 06:29:56.551042   54335 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551065   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:29:56.551069   54335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.16µs
	I1205 06:29:56.551073   54335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:29:56.551081   54335 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551103   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:29:56.551106   54335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 26.888µs
	I1205 06:29:56.551110   54335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:29:56.551117   54335 cache.go:87] Successfully saved all images to host disk.
	I1205 06:29:56.551339   54335 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:29:56.568156   54335 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:29:56.568181   54335 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:29:56.571582   54335 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:29:56.571608   54335 machine.go:94] provisionDockerMachine start ...
	I1205 06:29:56.571688   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.588675   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.588995   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.589001   54335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:29:56.736543   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.736557   54335 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:29:56.736615   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.754489   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.754781   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.754789   54335 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:29:56.915291   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.915355   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.933044   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.933393   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.933407   54335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:29:57.085183   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:29:57.085199   54335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:29:57.085221   54335 ubuntu.go:190] setting up certificates
	I1205 06:29:57.085229   54335 provision.go:84] configureAuth start
	I1205 06:29:57.085299   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:57.101349   54335 provision.go:143] copyHostCerts
	I1205 06:29:57.101410   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:29:57.101421   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:29:57.101492   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:29:57.101592   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:29:57.101596   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:29:57.101621   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:29:57.101678   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:29:57.101680   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:29:57.101703   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:29:57.101750   54335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
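The san=[...] list in the line above puts IP addresses and DNS names into a single server certificate. A minimal Go sketch of issuing such a certificate (self-signed for brevity; minikube actually signs with the ca.pem/ca-key.pem pair named above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-101526"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s later in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log line: IPs and hostnames go in separate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-101526", "localhost", "minikube"},
	}
	// Self-signed here; minikube uses its CA as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}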
	I1205 06:29:57.543303   54335 provision.go:177] copyRemoteCerts
	I1205 06:29:57.543357   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:29:57.543409   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.560691   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.666006   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:29:57.683446   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:29:57.700645   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:29:57.717863   54335 provision.go:87] duration metric: took 632.597506ms to configureAuth
	I1205 06:29:57.717880   54335 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:29:57.718064   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:57.718070   54335 machine.go:97] duration metric: took 1.146457487s to provisionDockerMachine
	I1205 06:29:57.718076   54335 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:29:57.718086   54335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:29:57.718137   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:29:57.718174   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.735331   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.841496   54335 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:29:57.844702   54335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:29:57.844721   54335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
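The VERSION_CODENAME warning two lines up is emitted because /etc/os-release keys are mapped onto a fixed struct with no matching field. Parsing into a map side-steps that; this is an illustrative sketch, not minikube's info.go:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=value lines into a map, tolerating keys
// the caller has never heard of (unlike a fixed struct).
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	m, err := parseOSRelease("/etc/os-release")
	fmt.Println(m["PRETTY_NAME"], err)
}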
	I1205 06:29:57.844731   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:29:57.844783   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:29:57.844859   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:29:57.844934   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:29:57.844984   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:29:57.852337   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:57.869668   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:29:57.887019   54335 start.go:296] duration metric: took 168.92936ms for postStartSetup
	I1205 06:29:57.887102   54335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:29:57.887149   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.903894   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.011756   54335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:29:58.016900   54335 fix.go:56] duration metric: took 1.465853892s for fixHost
	I1205 06:29:58.016919   54335 start.go:83] releasing machines lock for "functional-101526", held for 1.465896107s
	I1205 06:29:58.016988   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:58.035591   54335 ssh_runner.go:195] Run: cat /version.json
	I1205 06:29:58.035642   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.035909   54335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:29:58.035957   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.053529   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.058886   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.156784   54335 ssh_runner.go:195] Run: systemctl --version
	I1205 06:29:58.245777   54335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:29:58.249918   54335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:29:58.249974   54335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:29:58.257133   54335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:29:58.257146   54335 start.go:496] detecting cgroup driver to use...
	I1205 06:29:58.257190   54335 detect.go:187] detected "cgroupfs" cgroup driver on host os
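The detected "cgroupfs" line is the outcome of probing the host. One common heuristic, sketched here as an assumption rather than minikube's actual detect.go: report systemd when the unified cgroup v2 hierarchy is mounted and PID 1 is systemd, otherwise fall back to cgroupfs.

package main

import (
	"fmt"
	"os"
)

// cgroupDriverGuess is a simplified heuristic: cgroup v2 hosts expose
// /sys/fs/cgroup/cgroup.controllers, and systemd-managed hosts run
// systemd as PID 1.
func cgroupDriverGuess() string {
	_, v2 := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	comm, _ := os.ReadFile("/proc/1/comm")
	if v2 == nil && string(comm) == "systemd\n" {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println(cgroupDriverGuess()) }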
	I1205 06:29:58.257233   54335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:29:58.273979   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:29:58.288748   54335 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:29:58.288814   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:29:58.305248   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:29:58.319216   54335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:29:58.440307   54335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:29:58.559446   54335 docker.go:234] disabling docker service ...
	I1205 06:29:58.559504   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:29:58.574399   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:29:58.587407   54335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:29:58.701676   54335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:29:58.808689   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:29:58.821276   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:29:58.836401   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:29:58.846421   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:29:58.855275   54335 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:29:58.855341   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:29:58.864125   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.872649   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:29:58.881354   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.890354   54335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:29:58.898337   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:29:58.907106   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:29:58.915882   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
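Each sed invocation above rewrites /etc/containerd/config.toml in place, using a captured group to preserve the original indentation. The same edit expressed with Go's regexp package, on a stand-in config fragment:

package main

import (
	"fmt"
	"regexp"
)

// Mirrors the SystemdCgroup sed above: flip the value to false while
// keeping whatever leading whitespace the line already had.
func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
`
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}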
	I1205 06:29:58.924414   54335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:29:58.931809   54335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:29:58.939114   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.065680   54335 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:29:59.195981   54335 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:29:59.196040   54335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
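The "Will wait 60s for socket path" step is a poll-until-deadline loop around a stat of the containerd socket. A sketch of that shape (local os.Stat here; the real code runs stat on the node over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}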
	I1205 06:29:59.199987   54335 start.go:564] Will wait 60s for crictl version
	I1205 06:29:59.200039   54335 ssh_runner.go:195] Run: which crictl
	I1205 06:29:59.203560   54335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:29:59.235649   54335 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:29:59.235710   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.255405   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.283346   54335 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:29:59.286262   54335 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:29:59.301845   54335 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:29:59.308610   54335 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:29:59.311441   54335 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:29:59.311553   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:59.311627   54335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:29:59.336067   54335 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:29:59.336079   54335 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:29:59.336085   54335 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:29:59.336175   54335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:29:59.336232   54335 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:29:59.363378   54335 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:29:59.363395   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:59.363403   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:59.363415   54335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:29:59.363436   54335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:29:59.363559   54335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
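The kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A dependency-free sketch that splits such a stream and reports each document's kind (a real tool would use a YAML decoder; this is only an illustration):

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML stream on "---" separators and
// extracts each document's kind: line.
func kinds(stream string) []string {
	var out []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
			}
		}
	}
	return out
}

func main() {
	stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}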
	
	I1205 06:29:59.363624   54335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:29:59.371046   54335 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:29:59.371108   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:29:59.378354   54335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:29:59.390503   54335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:29:59.402745   54335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1205 06:29:59.414910   54335 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:29:59.418646   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.529578   54335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:29:59.846402   54335 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:29:59.846413   54335 certs.go:195] generating shared ca certs ...
	I1205 06:29:59.846426   54335 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:29:59.846569   54335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:29:59.846610   54335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:29:59.846616   54335 certs.go:257] generating profile certs ...
	I1205 06:29:59.846728   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:29:59.846770   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:29:59.846811   54335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:29:59.846921   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:29:59.846956   54335 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:29:59.846962   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:29:59.846989   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:29:59.847014   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:29:59.847036   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:29:59.847085   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:59.847736   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:29:59.867939   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:29:59.888562   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:29:59.907283   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:29:59.927879   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:29:59.944224   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:29:59.960459   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:29:59.979078   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:29:59.996293   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:30:00.066962   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:30:00.118991   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:30:00.185989   54335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:30:00.235503   54335 ssh_runner.go:195] Run: openssl version
	I1205 06:30:00.255104   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.270140   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:30:00.290181   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295705   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295771   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.399762   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:30:00.412238   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.433387   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:30:00.449934   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455249   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455319   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.517764   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:30:00.530824   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.546605   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:30:00.555560   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561005   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561068   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.611790   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:30:00.623580   54335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:30:00.628736   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:30:00.674439   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:30:00.717432   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:30:00.760669   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:30:00.802949   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:30:00.845730   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
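Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours (86,400 seconds). The equivalent check in Go's crypto/x509, reusing one path from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires
// within the given window, like `openssl x509 -checkend`.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}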
	I1205 06:30:00.892769   54335 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:30:00.892871   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:30:00.892957   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.923464   54335 cri.go:89] found id: ""
	I1205 06:30:00.923530   54335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:30:00.932111   54335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:30:00.932122   54335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:30:00.932182   54335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:30:00.940210   54335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:00.940808   54335 kubeconfig.go:125] found "functional-101526" server: "https://192.168.49.2:8441"
	I1205 06:30:00.942221   54335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:30:00.951085   54335 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:15:26.552544518 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:29:59.409281720 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
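The drift check above relies on diff's exit status: 0 when the files match, 1 when they differ, 2 on error. A sketch of that decision (an illustration, not minikube's exact code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and treats exit status 1 as
// "files differ" rather than as a failure.
func configDrifted(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}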
	I1205 06:30:00.951105   54335 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:30:00.951116   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1205 06:30:00.951177   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.983535   54335 cri.go:89] found id: ""
	I1205 06:30:00.983600   54335 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:30:00.999793   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:30:01.011193   54335 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  5 06:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Dec  5 06:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  5 06:19 /etc/kubernetes/scheduler.conf
	
	I1205 06:30:01.011277   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:30:01.020421   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:30:01.029014   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.029083   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:30:01.037495   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.045879   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.045943   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.054299   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:30:01.063067   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.063128   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:30:01.071319   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:30:01.080035   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:01.126871   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.550689   54335 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.423791138s)
	I1205 06:30:02.550750   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.758304   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.826924   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.872904   54335 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:30:02.872975   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:03.373516   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" check repeated every ~500ms from 06:30:03 through 06:31:01 ...]
	I1205 06:31:02.373960   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
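The run of pgrep lines above is the apiserver wait loop: the same check roughly every 500ms until a process matches or the deadline passes. The pattern with plain os/exec (minikube runs the command through its ssh_runner instead):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until it matches or the timeout elapses.
// pgrep exits 0 when at least one process matches the full-command-line
// pattern (-f), exact-matched (-x), newest first (-n).
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matched %q within %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}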
	I1205 06:31:02.873246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:02.873355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:02.899119   54335 cri.go:89] found id: ""
	I1205 06:31:02.899133   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.899140   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:02.899145   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:02.899201   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:02.926015   54335 cri.go:89] found id: ""
	I1205 06:31:02.926028   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.926036   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:02.926041   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:02.926100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:02.950775   54335 cri.go:89] found id: ""
	I1205 06:31:02.950788   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.950795   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:02.950800   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:02.950859   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:02.978268   54335 cri.go:89] found id: ""
	I1205 06:31:02.978282   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.978289   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:02.978294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:02.978352   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:03.015482   54335 cri.go:89] found id: ""
	I1205 06:31:03.015497   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.015506   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:03.015511   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:03.015575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:03.041353   54335 cri.go:89] found id: ""
	I1205 06:31:03.041366   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.041373   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:03.041379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:03.041463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:03.066457   54335 cri.go:89] found id: ""
	I1205 06:31:03.066472   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.066479   54335 logs.go:284] No container was found matching "kindnet"
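Once the process poll gives up, the cri.go lines above enumerate each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and log a warning for every empty result. A sketch of that per-component check, assuming local execution; the component list is copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints one container ID per line; no output means no match.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}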
	I1205 06:31:03.066487   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:03.066502   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:03.121069   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:03.121087   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:03.131794   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:03.131809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:03.195836   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:03.195847   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:03.195859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:03.258177   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:03.258195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
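Every kubectl call in these gather cycles fails with `connect: connection refused` on [::1]:8441, a plain TCP-level failure: nothing is listening on the apiserver port at all, so the problem is upstream of TLS or authentication. A quick way to confirm that, sketched with Go's net.DialTimeout against the same localhost:8441 endpoint seen in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// e.g. "dial tcp [::1]:8441: connect: connection refused", as in the stderr above
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}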
	I1205 06:31:05.785947   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:05.795932   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:05.795992   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:05.822996   54335 cri.go:89] found id: ""
	I1205 06:31:05.823010   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.823017   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:05.823022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:05.823079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:05.851647   54335 cri.go:89] found id: ""
	I1205 06:31:05.851660   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.851667   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:05.851671   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:05.851728   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:05.888840   54335 cri.go:89] found id: ""
	I1205 06:31:05.888853   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.888860   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:05.888865   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:05.888923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:05.916749   54335 cri.go:89] found id: ""
	I1205 06:31:05.916763   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.916771   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:05.916776   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:05.916838   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:05.941885   54335 cri.go:89] found id: ""
	I1205 06:31:05.941898   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.941905   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:05.941910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:05.941970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:05.967174   54335 cri.go:89] found id: ""
	I1205 06:31:05.967188   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.967195   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:05.967202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:05.967259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:05.991608   54335 cri.go:89] found id: ""
	I1205 06:31:05.991622   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.991629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:05.991637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:05.991647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:06.048885   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:06.048907   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:06.060386   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:06.060403   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:06.139830   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:06.139840   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:06.139853   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:06.202288   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:06.202307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
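Each failed cycle gathers the same five log sources (kubelet, dmesg, describe nodes, containerd, container status) before the next poll. A sketch of that gather sequence, assuming plain local bash execution instead of minikube's ssh_runner; the commands are copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"sudo journalctl -u containerd -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n%s", c, out)
		if err != nil {
			// "describe nodes" exits 1 for as long as the apiserver is down
			fmt.Println("command failed:", err)
		}
	}
}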
	I1205 06:31:08.730029   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:08.740211   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:08.740272   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:08.763977   54335 cri.go:89] found id: ""
	I1205 06:31:08.763991   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.763998   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:08.764004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:08.764064   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:08.788621   54335 cri.go:89] found id: ""
	I1205 06:31:08.788635   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.788642   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:08.788647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:08.788702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:08.813441   54335 cri.go:89] found id: ""
	I1205 06:31:08.813454   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.813461   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:08.813466   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:08.813522   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:08.837930   54335 cri.go:89] found id: ""
	I1205 06:31:08.837944   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.837951   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:08.837956   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:08.838014   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:08.865898   54335 cri.go:89] found id: ""
	I1205 06:31:08.865911   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.865918   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:08.865923   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:08.865985   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:08.893385   54335 cri.go:89] found id: ""
	I1205 06:31:08.893410   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.893417   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:08.893422   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:08.893488   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:08.922394   54335 cri.go:89] found id: ""
	I1205 06:31:08.922407   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.922414   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:08.922422   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:08.922432   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:08.977895   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:08.977913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:08.989011   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:08.989025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:09.057444   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:09.057456   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:09.057471   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:09.119855   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:09.119875   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.657869   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:11.668122   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:11.668185   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:11.692170   54335 cri.go:89] found id: ""
	I1205 06:31:11.692183   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.692190   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:11.692195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:11.692253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:11.716930   54335 cri.go:89] found id: ""
	I1205 06:31:11.716945   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.716951   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:11.716962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:11.717031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:11.741795   54335 cri.go:89] found id: ""
	I1205 06:31:11.741808   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.741815   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:11.741820   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:11.741881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:11.766411   54335 cri.go:89] found id: ""
	I1205 06:31:11.766425   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.766431   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:11.766437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:11.766495   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:11.791195   54335 cri.go:89] found id: ""
	I1205 06:31:11.791209   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.791216   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:11.791221   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:11.791280   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:11.819219   54335 cri.go:89] found id: ""
	I1205 06:31:11.819233   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.819245   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:11.819251   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:11.819312   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:11.851464   54335 cri.go:89] found id: ""
	I1205 06:31:11.851478   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.851491   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:11.851498   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:11.851508   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:11.931606   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:11.931625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.960389   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:11.960407   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:12.021080   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:12.021102   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:12.032273   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:12.032290   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:12.097324   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:14.597581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:14.607724   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:14.607782   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:14.632907   54335 cri.go:89] found id: ""
	I1205 06:31:14.632921   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.632928   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:14.632933   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:14.632989   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:14.657884   54335 cri.go:89] found id: ""
	I1205 06:31:14.657898   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.657905   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:14.657910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:14.657965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:14.681364   54335 cri.go:89] found id: ""
	I1205 06:31:14.681377   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.681384   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:14.681389   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:14.681462   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:14.709552   54335 cri.go:89] found id: ""
	I1205 06:31:14.709566   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.709573   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:14.709578   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:14.709642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:14.733105   54335 cri.go:89] found id: ""
	I1205 06:31:14.733118   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.733125   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:14.733130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:14.733217   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:14.759861   54335 cri.go:89] found id: ""
	I1205 06:31:14.759874   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.759881   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:14.759887   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:14.759943   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:14.785666   54335 cri.go:89] found id: ""
	I1205 06:31:14.785679   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.785686   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:14.785693   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:14.785706   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:14.854767   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:14.854785   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:14.854795   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:14.922701   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:14.922719   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:14.953207   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:14.953223   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:15.010462   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:15.010484   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.529572   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:17.539788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:17.539847   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:17.563677   54335 cri.go:89] found id: ""
	I1205 06:31:17.563691   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.563698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:17.563703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:17.563774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:17.593628   54335 cri.go:89] found id: ""
	I1205 06:31:17.593642   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.593649   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:17.593654   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:17.593720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:17.619071   54335 cri.go:89] found id: ""
	I1205 06:31:17.619084   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.619092   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:17.619097   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:17.619153   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:17.642944   54335 cri.go:89] found id: ""
	I1205 06:31:17.642958   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.642964   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:17.642970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:17.643037   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:17.667755   54335 cri.go:89] found id: ""
	I1205 06:31:17.667768   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.667775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:17.667780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:17.667836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:17.691060   54335 cri.go:89] found id: ""
	I1205 06:31:17.691073   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.691080   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:17.691085   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:17.691152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:17.714527   54335 cri.go:89] found id: ""
	I1205 06:31:17.714540   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.714547   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:17.714554   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:17.714564   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:17.777347   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:17.777365   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:17.804848   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:17.804862   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:17.866054   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:17.866072   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.877290   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:17.877305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:17.944157   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:20.445814   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:20.455929   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:20.456007   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:20.480265   54335 cri.go:89] found id: ""
	I1205 06:31:20.480280   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.480287   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:20.480294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:20.480371   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:20.504045   54335 cri.go:89] found id: ""
	I1205 06:31:20.504059   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.504065   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:20.504070   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:20.504128   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:20.528811   54335 cri.go:89] found id: ""
	I1205 06:31:20.528824   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.528831   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:20.528836   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:20.528893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:20.553249   54335 cri.go:89] found id: ""
	I1205 06:31:20.553272   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.553279   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:20.553284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:20.553358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:20.577735   54335 cri.go:89] found id: ""
	I1205 06:31:20.577767   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.577775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:20.577780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:20.577839   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:20.603821   54335 cri.go:89] found id: ""
	I1205 06:31:20.603835   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.603852   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:20.603858   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:20.603955   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:20.632954   54335 cri.go:89] found id: ""
	I1205 06:31:20.632985   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.632992   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:20.633000   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:20.633010   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:20.688822   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:20.688840   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:20.700167   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:20.700183   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:20.766199   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:20.766209   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:20.766219   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:20.829413   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:20.829439   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.369036   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:23.379250   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:23.379308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:23.407254   54335 cri.go:89] found id: ""
	I1205 06:31:23.407268   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.407275   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:23.407280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:23.407335   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:23.431989   54335 cri.go:89] found id: ""
	I1205 06:31:23.432002   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.432009   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:23.432014   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:23.432079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:23.467269   54335 cri.go:89] found id: ""
	I1205 06:31:23.467287   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.467293   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:23.467299   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:23.467362   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:23.490943   54335 cri.go:89] found id: ""
	I1205 06:31:23.490956   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.490962   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:23.490968   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:23.491025   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:23.519217   54335 cri.go:89] found id: ""
	I1205 06:31:23.519232   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.519239   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:23.519244   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:23.519306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:23.543863   54335 cri.go:89] found id: ""
	I1205 06:31:23.543877   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.543883   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:23.543888   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:23.543956   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:23.567865   54335 cri.go:89] found id: ""
	I1205 06:31:23.567878   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.567897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:23.567905   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:23.567914   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:23.632509   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:23.632529   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.662290   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:23.662305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:23.719254   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:23.719272   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:23.730331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:23.730346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:23.792133   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:26.293128   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:26.304108   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:26.304168   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:26.331011   54335 cri.go:89] found id: ""
	I1205 06:31:26.331024   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.331031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:26.331040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:26.331097   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:26.358547   54335 cri.go:89] found id: ""
	I1205 06:31:26.358562   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.358569   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:26.358573   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:26.358630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:26.387125   54335 cri.go:89] found id: ""
	I1205 06:31:26.387139   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.387146   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:26.387151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:26.387210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:26.412329   54335 cri.go:89] found id: ""
	I1205 06:31:26.412343   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.412350   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:26.412355   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:26.412433   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:26.437117   54335 cri.go:89] found id: ""
	I1205 06:31:26.437130   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.437138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:26.437142   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:26.437253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:26.465767   54335 cri.go:89] found id: ""
	I1205 06:31:26.465779   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.465787   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:26.465792   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:26.465855   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:26.489618   54335 cri.go:89] found id: ""
	I1205 06:31:26.489636   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.489643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:26.489651   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:26.489661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:26.516285   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:26.516307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:26.571623   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:26.571639   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:26.582532   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:26.582547   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:26.648629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:26.648640   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:26.648652   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
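The cycle above is the loop minikube repeats while waiting for the control plane: pgrep for a kube-apiserver process, then one crictl query per expected component, each returning an empty ID list. The same sweep, condensed into a loop (a sketch using the exact commands and component list from the cycle above):

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet; do
    # --quiet prints container IDs only; empty output means "not running"
    sudo crictl ps -a --quiet --name="$name"
  done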
	I1205 06:31:29.213295   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:29.223226   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:29.223291   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:29.248501   54335 cri.go:89] found id: ""
	I1205 06:31:29.248514   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.248521   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:29.248526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:29.248585   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:29.273551   54335 cri.go:89] found id: ""
	I1205 06:31:29.273564   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.273571   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:29.273576   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:29.273633   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:29.297959   54335 cri.go:89] found id: ""
	I1205 06:31:29.297972   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.297979   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:29.297985   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:29.298043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:29.322784   54335 cri.go:89] found id: ""
	I1205 06:31:29.322798   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.322809   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:29.322814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:29.322870   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:29.351067   54335 cri.go:89] found id: ""
	I1205 06:31:29.351080   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.351087   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:29.351092   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:29.351163   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:29.378768   54335 cri.go:89] found id: ""
	I1205 06:31:29.378782   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.378789   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:29.378794   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:29.378854   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:29.403528   54335 cri.go:89] found id: ""
	I1205 06:31:29.403542   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.403549   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:29.403556   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:29.403567   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:29.471248   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:29.463937   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.464521   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466184   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466622   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.467929   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:29.471259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:29.471269   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:29.533062   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:29.533080   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:29.564293   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:29.564323   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:29.619083   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:29.619101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:32.130510   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:32.143539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:32.143642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:32.171414   54335 cri.go:89] found id: ""
	I1205 06:31:32.171428   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.171436   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:32.171441   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:32.171499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:32.196112   54335 cri.go:89] found id: ""
	I1205 06:31:32.196125   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.196132   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:32.196137   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:32.196195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:32.223236   54335 cri.go:89] found id: ""
	I1205 06:31:32.223250   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.223257   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:32.223261   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:32.223317   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:32.247226   54335 cri.go:89] found id: ""
	I1205 06:31:32.247240   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.247247   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:32.247252   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:32.247308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:32.275892   54335 cri.go:89] found id: ""
	I1205 06:31:32.275905   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.275912   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:32.275918   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:32.275975   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:32.304746   54335 cri.go:89] found id: ""
	I1205 06:31:32.304759   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.304767   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:32.304772   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:32.304831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:32.329065   54335 cri.go:89] found id: ""
	I1205 06:31:32.329078   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.329085   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:32.329092   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:32.329101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:32.384331   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:32.384349   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:32.395108   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:32.395123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:32.457079   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:32.449726   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.450348   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.451857   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.452270   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.453749   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:32.457097   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:32.457108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:32.520612   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:32.520631   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
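When the sweep comes back empty, each cycle falls back to host-side log collection. The same five sources can be pulled manually with the commands the harness runs (copied verbatim from the log; the kubectl call uses the node-local binary and kubeconfig, as shown above):

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u containerd -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo crictl ps -a || sudo docker ps -a
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig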
	I1205 06:31:35.049835   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:35.059785   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:35.059850   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:35.084594   54335 cri.go:89] found id: ""
	I1205 06:31:35.084610   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.084617   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:35.084624   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:35.084682   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:35.119519   54335 cri.go:89] found id: ""
	I1205 06:31:35.119533   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.119553   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:35.119559   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:35.119625   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:35.146284   54335 cri.go:89] found id: ""
	I1205 06:31:35.146298   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.146305   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:35.146310   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:35.146370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:35.174570   54335 cri.go:89] found id: ""
	I1205 06:31:35.174583   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.174590   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:35.174596   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:35.174653   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:35.198347   54335 cri.go:89] found id: ""
	I1205 06:31:35.198361   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.198368   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:35.198374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:35.198430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:35.226196   54335 cri.go:89] found id: ""
	I1205 06:31:35.226210   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.226216   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:35.226222   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:35.226281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:35.250876   54335 cri.go:89] found id: ""
	I1205 06:31:35.250889   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.250897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:35.250904   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:35.250913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:35.304930   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:35.304948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:35.315954   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:35.315970   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:35.377099   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:35.369290   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.369826   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371500   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371964   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.373533   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:35.377109   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:35.377120   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:35.437784   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:35.437801   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:37.968228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:37.977892   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:37.977968   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:38.010142   54335 cri.go:89] found id: ""
	I1205 06:31:38.010158   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.010173   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:38.010180   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:38.010249   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:38.048020   54335 cri.go:89] found id: ""
	I1205 06:31:38.048034   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.048041   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:38.048047   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:38.048112   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:38.077977   54335 cri.go:89] found id: ""
	I1205 06:31:38.077991   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.077999   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:38.078004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:38.078068   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:38.115520   54335 cri.go:89] found id: ""
	I1205 06:31:38.115534   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.115541   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:38.115546   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:38.115618   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:38.141580   54335 cri.go:89] found id: ""
	I1205 06:31:38.141593   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.141613   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:38.141618   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:38.141673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:38.167473   54335 cri.go:89] found id: ""
	I1205 06:31:38.167487   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.167493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:38.167499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:38.167565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:38.190856   54335 cri.go:89] found id: ""
	I1205 06:31:38.190869   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.190876   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:38.190884   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:38.190894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:38.245488   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:38.245505   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:38.255819   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:38.255834   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:38.319935   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:38.311836   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.312540   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314137   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314745   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.316388   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:38.319952   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:38.319963   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:38.381733   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:38.381750   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:40.911397   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:40.921257   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:40.921321   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:40.947604   54335 cri.go:89] found id: ""
	I1205 06:31:40.947618   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.947625   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:40.947630   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:40.947694   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:40.973136   54335 cri.go:89] found id: ""
	I1205 06:31:40.973148   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.973186   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:40.973191   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:40.973256   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:40.996412   54335 cri.go:89] found id: ""
	I1205 06:31:40.996425   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.996432   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:40.996437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:40.996497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:41.024001   54335 cri.go:89] found id: ""
	I1205 06:31:41.024015   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.024022   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:41.024028   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:41.024086   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:41.051496   54335 cri.go:89] found id: ""
	I1205 06:31:41.051510   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.051517   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:41.051522   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:41.051582   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:41.080451   54335 cri.go:89] found id: ""
	I1205 06:31:41.080464   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.080471   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:41.080476   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:41.080533   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:41.117388   54335 cri.go:89] found id: ""
	I1205 06:31:41.117401   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.117409   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:41.117416   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:41.117426   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:41.182349   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:41.182368   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:41.193093   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:41.193108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:41.254159   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:41.246911   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.247523   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249025   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249503   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.250928   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:41.254170   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:41.254181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:41.321082   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:41.321101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:43.851964   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:43.862187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:43.862247   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:43.886923   54335 cri.go:89] found id: ""
	I1205 06:31:43.886937   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.886944   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:43.886950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:43.887009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:43.912496   54335 cri.go:89] found id: ""
	I1205 06:31:43.912509   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.912516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:43.912521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:43.912579   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:43.936914   54335 cri.go:89] found id: ""
	I1205 06:31:43.936928   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.936938   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:43.936943   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:43.937000   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:43.961282   54335 cri.go:89] found id: ""
	I1205 06:31:43.961297   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.961304   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:43.961314   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:43.961378   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:43.988380   54335 cri.go:89] found id: ""
	I1205 06:31:43.988394   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.988401   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:43.988406   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:43.988464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:44.020415   54335 cri.go:89] found id: ""
	I1205 06:31:44.020429   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.020437   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:44.020442   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:44.020501   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:44.045852   54335 cri.go:89] found id: ""
	I1205 06:31:44.045866   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.045873   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:44.045881   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:44.045894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:44.056666   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:44.056681   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:44.135868   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:44.126530   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.127194   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129371   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129954   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.131639   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:44.135879   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:44.135890   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:44.204481   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:44.204500   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:44.232917   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:44.232935   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:46.789779   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:46.799818   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:46.799875   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:46.823971   54335 cri.go:89] found id: ""
	I1205 06:31:46.823985   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.823992   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:46.823998   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:46.824061   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:46.848342   54335 cri.go:89] found id: ""
	I1205 06:31:46.848356   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.848363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:46.848368   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:46.848425   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:46.873786   54335 cri.go:89] found id: ""
	I1205 06:31:46.873800   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.873807   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:46.873812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:46.873873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:46.903465   54335 cri.go:89] found id: ""
	I1205 06:31:46.903479   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.903487   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:46.903492   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:46.903549   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:46.932432   54335 cri.go:89] found id: ""
	I1205 06:31:46.932446   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.932453   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:46.932458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:46.932518   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:46.957671   54335 cri.go:89] found id: ""
	I1205 06:31:46.957684   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.957692   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:46.957697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:46.957760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:46.983050   54335 cri.go:89] found id: ""
	I1205 06:31:46.983063   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.983077   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:46.983085   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:46.983095   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:47.042088   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:47.042105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:47.053482   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:47.053498   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:47.131108   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:47.122748   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.123420   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125206   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125739   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.127319   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:47.131117   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:47.131128   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:47.204434   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:47.204452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:49.735640   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:49.745807   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:49.745868   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:49.770984   54335 cri.go:89] found id: ""
	I1205 06:31:49.770997   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.771004   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:49.771009   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:49.771072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:49.795524   54335 cri.go:89] found id: ""
	I1205 06:31:49.795538   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.795545   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:49.795550   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:49.795605   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:49.820126   54335 cri.go:89] found id: ""
	I1205 06:31:49.820140   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.820147   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:49.820152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:49.820209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:49.844379   54335 cri.go:89] found id: ""
	I1205 06:31:49.844392   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.844401   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:49.844408   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:49.844465   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:49.871132   54335 cri.go:89] found id: ""
	I1205 06:31:49.871144   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.871152   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:49.871157   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:49.871214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:49.894867   54335 cri.go:89] found id: ""
	I1205 06:31:49.894880   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.894887   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:49.894893   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:49.894949   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:49.920144   54335 cri.go:89] found id: ""
	I1205 06:31:49.920157   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.920164   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:49.920171   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:49.920181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:49.979573   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:49.979595   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:49.990405   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:49.990420   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:50.061353   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:50.061364   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:50.061376   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:50.139097   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:50.139131   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:52.678459   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:52.688604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:52.688663   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:52.712686   54335 cri.go:89] found id: ""
	I1205 06:31:52.712700   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.712707   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:52.712712   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:52.712774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:52.746954   54335 cri.go:89] found id: ""
	I1205 06:31:52.746968   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.746975   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:52.746980   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:52.747039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:52.771325   54335 cri.go:89] found id: ""
	I1205 06:31:52.771338   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.771345   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:52.771350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:52.771406   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:52.795882   54335 cri.go:89] found id: ""
	I1205 06:31:52.795896   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.795902   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:52.795908   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:52.795965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:52.820064   54335 cri.go:89] found id: ""
	I1205 06:31:52.820079   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.820085   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:52.820090   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:52.820150   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:52.848297   54335 cri.go:89] found id: ""
	I1205 06:31:52.848311   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.848317   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:52.848323   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:52.848381   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:52.876041   54335 cri.go:89] found id: ""
	I1205 06:31:52.876055   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.876062   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:52.876069   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:52.876079   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:52.931790   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:52.931811   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:52.942929   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:52.942944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:53.007664   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:53.007675   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:53.007686   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:53.073695   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:53.073712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.610763   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:55.620883   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:55.620945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:55.645677   54335 cri.go:89] found id: ""
	I1205 06:31:55.645691   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.645698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:55.645703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:55.645763   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:55.670962   54335 cri.go:89] found id: ""
	I1205 06:31:55.670975   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.670982   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:55.670987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:55.671045   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:55.695354   54335 cri.go:89] found id: ""
	I1205 06:31:55.695367   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.695374   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:55.695379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:55.695447   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:55.719264   54335 cri.go:89] found id: ""
	I1205 06:31:55.719277   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.719284   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:55.719290   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:55.719347   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:55.742928   54335 cri.go:89] found id: ""
	I1205 06:31:55.742941   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.742948   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:55.742954   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:55.743013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:55.766643   54335 cri.go:89] found id: ""
	I1205 06:31:55.766657   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.766664   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:55.766672   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:55.766729   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:55.789985   54335 cri.go:89] found id: ""
	I1205 06:31:55.789999   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.790005   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:55.790051   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:55.790062   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.817984   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:55.818000   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:55.874068   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:55.874085   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:55.885873   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:55.885888   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:55.950375   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:55.950385   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:55.950396   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.513319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:58.523187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:58.523244   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:58.546403   54335 cri.go:89] found id: ""
	I1205 06:31:58.546416   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.546423   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:58.546429   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:58.546486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:58.570005   54335 cri.go:89] found id: ""
	I1205 06:31:58.570019   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.570035   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:58.570040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:58.570098   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:58.594200   54335 cri.go:89] found id: ""
	I1205 06:31:58.594214   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.594220   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:58.594225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:58.594284   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:58.618421   54335 cri.go:89] found id: ""
	I1205 06:31:58.618434   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.618440   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:58.618445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:58.618499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:58.642656   54335 cri.go:89] found id: ""
	I1205 06:31:58.642669   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.642676   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:58.642682   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:58.642742   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:58.667838   54335 cri.go:89] found id: ""
	I1205 06:31:58.667850   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.667858   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:58.667863   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:58.667933   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:58.695900   54335 cri.go:89] found id: ""
	I1205 06:31:58.695914   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.695921   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:58.695929   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:58.695939   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:58.751191   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:58.751209   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:58.761861   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:58.761882   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:58.829503   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:58.829513   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:58.829524   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.892286   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:58.892304   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:01.420326   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:01.430350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:01.430415   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:01.455307   54335 cri.go:89] found id: ""
	I1205 06:32:01.455320   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.455328   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:01.455333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:01.455388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:01.479758   54335 cri.go:89] found id: ""
	I1205 06:32:01.479771   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.479778   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:01.479784   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:01.479840   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:01.502828   54335 cri.go:89] found id: ""
	I1205 06:32:01.502841   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.502848   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:01.502853   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:01.502908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:01.528675   54335 cri.go:89] found id: ""
	I1205 06:32:01.528688   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.528698   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:01.528704   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:01.528762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:01.553405   54335 cri.go:89] found id: ""
	I1205 06:32:01.553419   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.553426   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:01.553431   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:01.553510   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:01.578373   54335 cri.go:89] found id: ""
	I1205 06:32:01.578387   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.578394   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:01.578400   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:01.578464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:01.603666   54335 cri.go:89] found id: ""
	I1205 06:32:01.603689   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.603697   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:01.603704   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:01.603714   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:01.661152   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:01.661181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:01.672814   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:01.672831   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:01.736722   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:01.736731   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:01.736742   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:01.799762   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:01.799780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:04.328972   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:04.339381   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:04.339441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:04.365391   54335 cri.go:89] found id: ""
	I1205 06:32:04.365405   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.365412   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:04.365418   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:04.365487   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:04.395556   54335 cri.go:89] found id: ""
	I1205 06:32:04.395570   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.395577   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:04.395582   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:04.395640   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:04.425328   54335 cri.go:89] found id: ""
	I1205 06:32:04.425341   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.425348   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:04.425354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:04.425420   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:04.450514   54335 cri.go:89] found id: ""
	I1205 06:32:04.450528   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.450536   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:04.450541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:04.450604   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:04.479372   54335 cri.go:89] found id: ""
	I1205 06:32:04.479386   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.479393   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:04.479398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:04.479459   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:04.504452   54335 cri.go:89] found id: ""
	I1205 06:32:04.504466   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.504473   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:04.504479   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:04.504539   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:04.529609   54335 cri.go:89] found id: ""
	I1205 06:32:04.529622   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.529629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:04.529637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:04.529649   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:04.584301   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:04.584319   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:04.595557   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:04.595572   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:04.660266   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:04.660277   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:04.660288   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:04.723098   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:04.723115   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:07.257738   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:07.268081   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:07.268144   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:07.292559   54335 cri.go:89] found id: ""
	I1205 06:32:07.292573   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.292580   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:07.292585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:07.292645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:07.316782   54335 cri.go:89] found id: ""
	I1205 06:32:07.316796   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.316803   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:07.316809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:07.316869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:07.346176   54335 cri.go:89] found id: ""
	I1205 06:32:07.346189   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.346196   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:07.346201   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:07.346263   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:07.378787   54335 cri.go:89] found id: ""
	I1205 06:32:07.378800   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.378807   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:07.378812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:07.378869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:07.406652   54335 cri.go:89] found id: ""
	I1205 06:32:07.406666   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.406673   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:07.406678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:07.406746   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:07.438624   54335 cri.go:89] found id: ""
	I1205 06:32:07.438642   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.438649   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:07.438655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:07.438726   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:07.464230   54335 cri.go:89] found id: ""
	I1205 06:32:07.464243   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.464250   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:07.464257   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:07.464266   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:07.520945   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:07.520962   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:07.531896   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:07.531911   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:07.598302   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:07.598317   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:07.598327   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:07.661122   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:07.661139   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.190348   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:10.201225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:10.201307   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:10.230433   54335 cri.go:89] found id: ""
	I1205 06:32:10.230446   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.230453   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:10.230458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:10.230512   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:10.254051   54335 cri.go:89] found id: ""
	I1205 06:32:10.254070   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.254077   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:10.254082   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:10.254140   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:10.278518   54335 cri.go:89] found id: ""
	I1205 06:32:10.278531   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.278538   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:10.278543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:10.278599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:10.302979   54335 cri.go:89] found id: ""
	I1205 06:32:10.302992   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.302999   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:10.303004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:10.303059   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:10.331316   54335 cri.go:89] found id: ""
	I1205 06:32:10.331330   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.331337   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:10.331341   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:10.331400   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:10.362875   54335 cri.go:89] found id: ""
	I1205 06:32:10.362889   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.362896   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:10.362902   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:10.362959   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:10.393788   54335 cri.go:89] found id: ""
	I1205 06:32:10.393802   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.393810   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:10.393818   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:10.393829   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:10.459886   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:10.459895   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:10.459905   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:10.521460   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:10.521481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.549040   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:10.549056   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:10.605396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:10.605414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.117854   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:13.128117   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:13.128179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:13.153085   54335 cri.go:89] found id: ""
	I1205 06:32:13.153098   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.153105   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:13.153110   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:13.153199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:13.178442   54335 cri.go:89] found id: ""
	I1205 06:32:13.178455   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.178462   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:13.178467   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:13.178524   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:13.203207   54335 cri.go:89] found id: ""
	I1205 06:32:13.203220   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.203229   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:13.203234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:13.203292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:13.228073   54335 cri.go:89] found id: ""
	I1205 06:32:13.228086   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.228093   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:13.228098   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:13.228159   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:13.253259   54335 cri.go:89] found id: ""
	I1205 06:32:13.253272   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.253288   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:13.253293   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:13.253350   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:13.278480   54335 cri.go:89] found id: ""
	I1205 06:32:13.278493   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.278500   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:13.278506   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:13.278562   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:13.301934   54335 cri.go:89] found id: ""
	I1205 06:32:13.301948   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.301955   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:13.301962   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:13.301972   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:13.356855   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:13.356876   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.368331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:13.368352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:13.438131   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:13.438141   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:13.438151   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:13.501680   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:13.501699   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
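The pass above is one complete iteration of minikube's wait-for-apiserver loop: look for a kube-apiserver process, ask the CRI runtime for each control-plane container, and, when nothing is found, collect kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying. The same pass can be reproduced by hand inside the node with the commands the log shows; the loop wrapper and the sleep below are illustrative assumptions, not minikube's actual implementation (note the harness's own fallback chain on the container-status line: which crictl || echo crictl, then docker ps -a):

    # Sketch of one probe pass, assembled from the commands logged above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no $c container found"
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo crictl ps -a || sudo docker ps -a   # same runtime fallback as the harness
    sleep 3                                  # assumed retry interval (~3 s in this log)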
	I1205 06:32:16.032304   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:16.042939   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:16.043006   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:16.069762   54335 cri.go:89] found id: ""
	I1205 06:32:16.069775   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.069782   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:16.069788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:16.069844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:16.094242   54335 cri.go:89] found id: ""
	I1205 06:32:16.094255   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.094264   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:16.094270   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:16.094336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:16.120352   54335 cri.go:89] found id: ""
	I1205 06:32:16.120366   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.120373   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:16.120378   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:16.120435   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:16.149183   54335 cri.go:89] found id: ""
	I1205 06:32:16.149196   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.149203   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:16.149208   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:16.149270   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:16.179309   54335 cri.go:89] found id: ""
	I1205 06:32:16.179322   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.179328   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:16.179333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:16.179388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:16.204104   54335 cri.go:89] found id: ""
	I1205 06:32:16.204118   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.204125   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:16.204130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:16.204190   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:16.230914   54335 cri.go:89] found id: ""
	I1205 06:32:16.230927   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.230934   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:16.230941   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:16.230950   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:16.286405   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:16.286423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:16.297122   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:16.297136   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:16.367421   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:16.367430   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:16.367442   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:16.452050   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:16.452076   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:18.982231   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:18.992354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:18.992412   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:19.017989   54335 cri.go:89] found id: ""
	I1205 06:32:19.018004   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.018011   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:19.018016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:19.018077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:19.042217   54335 cri.go:89] found id: ""
	I1205 06:32:19.042230   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.042237   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:19.042242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:19.042301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:19.066699   54335 cri.go:89] found id: ""
	I1205 06:32:19.066713   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.066720   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:19.066725   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:19.066785   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:19.095590   54335 cri.go:89] found id: ""
	I1205 06:32:19.095603   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.095610   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:19.095616   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:19.095672   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:19.119155   54335 cri.go:89] found id: ""
	I1205 06:32:19.119169   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.119176   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:19.119181   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:19.119237   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:19.142787   54335 cri.go:89] found id: ""
	I1205 06:32:19.142801   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.142807   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:19.142813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:19.142873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:19.168013   54335 cri.go:89] found id: ""
	I1205 06:32:19.168025   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.168032   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:19.168039   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:19.168051   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:19.178464   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:19.178481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:19.240233   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:19.240244   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:19.240253   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:19.300198   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:19.300217   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:19.329682   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:19.329697   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:21.888551   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:21.898274   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:21.898337   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:21.922474   54335 cri.go:89] found id: ""
	I1205 06:32:21.922486   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.922493   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:21.922498   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:21.922558   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:21.950761   54335 cri.go:89] found id: ""
	I1205 06:32:21.950775   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.950781   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:21.950786   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:21.950844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:21.973829   54335 cri.go:89] found id: ""
	I1205 06:32:21.973843   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.973849   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:21.973854   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:21.973912   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:21.997620   54335 cri.go:89] found id: ""
	I1205 06:32:21.997634   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.997641   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:21.997647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:21.997702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:22.033207   54335 cri.go:89] found id: ""
	I1205 06:32:22.033221   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.033228   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:22.033234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:22.033296   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:22.062888   54335 cri.go:89] found id: ""
	I1205 06:32:22.062902   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.062909   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:22.062915   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:22.062973   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:22.091975   54335 cri.go:89] found id: ""
	I1205 06:32:22.091989   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.091996   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:22.092004   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:22.092017   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:22.103145   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:22.103160   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:22.164851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:22.164860   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:22.164870   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:22.226105   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:22.226124   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:22.253915   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:22.253929   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:24.811993   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:24.821806   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:24.821865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:24.845836   54335 cri.go:89] found id: ""
	I1205 06:32:24.845850   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.845857   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:24.845864   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:24.845919   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:24.870475   54335 cri.go:89] found id: ""
	I1205 06:32:24.870489   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.870496   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:24.870505   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:24.870560   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:24.895049   54335 cri.go:89] found id: ""
	I1205 06:32:24.895061   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.895068   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:24.895074   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:24.895130   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:24.924307   54335 cri.go:89] found id: ""
	I1205 06:32:24.924320   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.924327   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:24.924332   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:24.924390   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:24.949595   54335 cri.go:89] found id: ""
	I1205 06:32:24.949608   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.949616   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:24.949621   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:24.949680   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:24.974582   54335 cri.go:89] found id: ""
	I1205 06:32:24.974595   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.974602   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:24.974607   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:24.974664   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:25.003723   54335 cri.go:89] found id: ""
	I1205 06:32:25.003739   54335 logs.go:282] 0 containers: []
	W1205 06:32:25.003747   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:25.003755   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:25.003766   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:25.065829   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:25.065846   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:25.077220   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:25.077236   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:25.140111   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:25.140121   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:25.140135   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:25.206118   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:25.206137   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:27.733938   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:27.744224   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:27.744282   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:27.769011   54335 cri.go:89] found id: ""
	I1205 06:32:27.769024   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.769031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:27.769036   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:27.769094   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:27.793434   54335 cri.go:89] found id: ""
	I1205 06:32:27.793448   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.793455   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:27.793460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:27.793556   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:27.821088   54335 cri.go:89] found id: ""
	I1205 06:32:27.821101   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.821108   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:27.821112   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:27.821209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:27.847229   54335 cri.go:89] found id: ""
	I1205 06:32:27.847242   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.847249   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:27.847254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:27.847310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:27.870944   54335 cri.go:89] found id: ""
	I1205 06:32:27.870958   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.870965   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:27.870970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:27.871031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:27.895361   54335 cri.go:89] found id: ""
	I1205 06:32:27.895375   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.895382   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:27.895388   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:27.895445   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:27.920868   54335 cri.go:89] found id: ""
	I1205 06:32:27.920881   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.920888   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:27.920897   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:27.920908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:27.984326   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:27.984346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:28.018053   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:28.018070   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:28.075646   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:28.075663   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:28.087097   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:28.087112   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:28.151403   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:30.651598   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:30.661458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:30.661527   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:30.689413   54335 cri.go:89] found id: ""
	I1205 06:32:30.689426   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.689443   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:30.689450   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:30.689523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:30.712971   54335 cri.go:89] found id: ""
	I1205 06:32:30.712987   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.712994   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:30.712999   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:30.713057   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:30.737851   54335 cri.go:89] found id: ""
	I1205 06:32:30.737871   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.737879   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:30.737884   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:30.737945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:30.761745   54335 cri.go:89] found id: ""
	I1205 06:32:30.761759   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.761766   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:30.761771   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:30.761836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:30.784898   54335 cri.go:89] found id: ""
	I1205 06:32:30.784912   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.784919   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:30.784924   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:30.784980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:30.810894   54335 cri.go:89] found id: ""
	I1205 06:32:30.810908   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.810915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:30.810920   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:30.810976   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:30.839604   54335 cri.go:89] found id: ""
	I1205 06:32:30.839617   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.839623   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:30.839636   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:30.839647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:30.865641   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:30.865658   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:30.921606   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:30.921625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:30.932281   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:30.932297   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:30.995168   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:30.995177   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:30.995187   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.558401   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:33.568813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:33.568893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:33.596483   54335 cri.go:89] found id: ""
	I1205 06:32:33.596496   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.596503   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:33.596508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:33.596566   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:33.624025   54335 cri.go:89] found id: ""
	I1205 06:32:33.624039   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.624046   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:33.624051   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:33.624108   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:33.655953   54335 cri.go:89] found id: ""
	I1205 06:32:33.655966   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.655974   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:33.655979   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:33.656039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:33.684431   54335 cri.go:89] found id: ""
	I1205 06:32:33.684445   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.684452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:33.684458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:33.684517   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:33.710631   54335 cri.go:89] found id: ""
	I1205 06:32:33.710644   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.710651   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:33.710656   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:33.710714   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:33.735367   54335 cri.go:89] found id: ""
	I1205 06:32:33.735380   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.735387   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:33.735393   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:33.735450   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:33.759636   54335 cri.go:89] found id: ""
	I1205 06:32:33.759650   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.759657   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:33.759664   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:33.759675   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:33.814547   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:33.814565   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:33.825805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:33.825820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:33.891604   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:33.891614   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:33.891624   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.953767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:33.953787   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:36.482228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:36.492694   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:36.492753   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:36.518206   54335 cri.go:89] found id: ""
	I1205 06:32:36.518222   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.518229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:36.518233   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:36.518290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:36.543531   54335 cri.go:89] found id: ""
	I1205 06:32:36.543544   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.543551   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:36.543556   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:36.543615   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:36.567286   54335 cri.go:89] found id: ""
	I1205 06:32:36.567299   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.567306   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:36.567311   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:36.567367   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:36.592165   54335 cri.go:89] found id: ""
	I1205 06:32:36.592178   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.592185   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:36.592190   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:36.592246   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:36.621238   54335 cri.go:89] found id: ""
	I1205 06:32:36.621251   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.621258   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:36.621264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:36.621329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:36.646816   54335 cri.go:89] found id: ""
	I1205 06:32:36.646838   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.646845   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:36.646850   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:36.646917   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:36.672562   54335 cri.go:89] found id: ""
	I1205 06:32:36.672575   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.672582   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:36.672599   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:36.672609   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:36.727909   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:36.727926   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:36.738625   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:36.738641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:36.803851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:36.803861   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:36.803872   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:36.865831   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:36.865849   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
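
The cycle above is minikube's control-plane probe: after the process check fails, it enumerates each expected component with "sudo crictl ps -a --quiet --name=<component>", and every query comes back with an empty ID list (found id: ""), meaning none of kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet was ever created. A minimal Go sketch of that enumeration, assuming crictl is available and run locally rather than over SSH as the test does; this is a simplified reconstruction, not minikube's actual code:

    // Assumed, simplified reconstruction of the per-component check above;
    // the real test runs the identical crictl command over SSH (ssh_runner.go).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            if ids := strings.Fields(string(out)); len(ids) == 0 {
                // Corresponds to: No container was found matching "<name>"
                fmt.Printf("no container matching %q\n", name)
            } else {
                fmt.Printf("%q -> %v\n", name, ids)
            }
        }
    }
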
	I1205 06:32:39.393852   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:39.404022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:39.404090   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:39.433108   54335 cri.go:89] found id: ""
	I1205 06:32:39.433122   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.433129   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:39.433134   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:39.433218   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:39.458840   54335 cri.go:89] found id: ""
	I1205 06:32:39.458853   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.458862   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:39.458867   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:39.458923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:39.483121   54335 cri.go:89] found id: ""
	I1205 06:32:39.483135   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.483142   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:39.483147   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:39.483203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:39.508080   54335 cri.go:89] found id: ""
	I1205 06:32:39.508092   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.508100   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:39.508107   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:39.508166   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:39.532483   54335 cri.go:89] found id: ""
	I1205 06:32:39.532496   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.532503   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:39.532508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:39.532563   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:39.556203   54335 cri.go:89] found id: ""
	I1205 06:32:39.556217   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.556224   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:39.556229   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:39.556286   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:39.579787   54335 cri.go:89] found id: ""
	I1205 06:32:39.579802   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.579809   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:39.579818   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:39.579828   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:39.644828   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:39.644847   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:39.657327   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:39.657341   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:39.724034   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:39.724044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:39.724054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:39.786205   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:39.786224   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
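
The line that opens every cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, asks whether a kube-apiserver process matching that pattern exists: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest hit. pgrep exits with status 1 when nothing matches, which is the state every cycle hits here. The same probe from Go, for illustration:

    // Same probe as the log's cycle opener, run locally for illustration.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits 1 when no process matches; that is every cycle here.
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        fmt.Printf("newest kube-apiserver pid: %s", out)
    }
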
	I1205 06:32:42.317043   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:42.327925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:42.327988   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:42.353925   54335 cri.go:89] found id: ""
	I1205 06:32:42.353939   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.353946   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:42.353952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:42.354013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:42.385300   54335 cri.go:89] found id: ""
	I1205 06:32:42.385314   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.385321   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:42.385326   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:42.385385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:42.411306   54335 cri.go:89] found id: ""
	I1205 06:32:42.411319   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.411326   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:42.411331   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:42.411389   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:42.436499   54335 cri.go:89] found id: ""
	I1205 06:32:42.436513   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.436520   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:42.436526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:42.436590   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:42.461983   54335 cri.go:89] found id: ""
	I1205 06:32:42.462000   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.462008   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:42.462013   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:42.462072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:42.490948   54335 cri.go:89] found id: ""
	I1205 06:32:42.490962   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.490971   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:42.490976   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:42.491036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:42.515766   54335 cri.go:89] found id: ""
	I1205 06:32:42.515785   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.515793   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:42.515800   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:42.515810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:42.571249   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:42.571267   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:42.582146   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:42.582161   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:42.671227   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:42.671236   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:42.671247   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:42.733761   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:42.733780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
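
The repeated describe-nodes failures all reduce to one symptom: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8441 (the --apiserver-port this test passes), and nothing is listening there, so every request ends in connection refused. A quick way to confirm that from Go, dialing the same port; assumed to be run on the node itself, since the kubeconfig targets localhost:

    // Dial the configured apiserver port (8441 in this test).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // The failing state in the log:
            // dial tcp [::1]:8441: connect: connection refused
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }
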
	I1205 06:32:45.261718   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:45.276631   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:45.276700   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:45.305280   54335 cri.go:89] found id: ""
	I1205 06:32:45.305296   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.305304   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:45.305309   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:45.305375   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:45.332314   54335 cri.go:89] found id: ""
	I1205 06:32:45.332407   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.332482   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:45.332488   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:45.332551   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:45.368080   54335 cri.go:89] found id: ""
	I1205 06:32:45.368141   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.368165   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:45.368171   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:45.368336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:45.400257   54335 cri.go:89] found id: ""
	I1205 06:32:45.400284   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.400292   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:45.400298   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:45.400368   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:45.425301   54335 cri.go:89] found id: ""
	I1205 06:32:45.425314   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.425321   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:45.425327   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:45.425385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:45.450756   54335 cri.go:89] found id: ""
	I1205 06:32:45.450769   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.450777   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:45.450782   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:45.450845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:45.481391   54335 cri.go:89] found id: ""
	I1205 06:32:45.481405   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.481413   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:45.481421   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:45.481441   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:45.539446   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:45.539465   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:45.550849   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:45.550865   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:45.628789   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:45.628800   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:45.628810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:45.699540   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:45.699558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
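
The "container status" gather uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. That is: resolve crictl's path if which finds it, otherwise try the bare name, and if the whole crictl attempt fails, fall back to docker ps -a. A rough Go equivalent of that fallback, offered as an assumed reconstruction rather than minikube's actual code (the real test simply runs the shell line above over SSH):

    // Assumed Go equivalent of:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        crictl := "crictl" // bare name, as in the `|| echo crictl` branch
        if path, err := exec.LookPath("crictl"); err == nil {
            crictl = path // `which crictl` succeeded
        }
        if out, err := exec.Command("sudo", crictl, "ps", "-a").Output(); err == nil {
            return out, nil
        }
        // The `|| sudo docker ps -a` fallback.
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker answered:", err)
            return
        }
        fmt.Print(string(out))
    }
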
	I1205 06:32:48.227049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:48.237481   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:48.237550   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:48.267696   54335 cri.go:89] found id: ""
	I1205 06:32:48.267709   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.267716   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:48.267721   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:48.267789   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:48.294097   54335 cri.go:89] found id: ""
	I1205 06:32:48.294112   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.294118   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:48.294124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:48.294186   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:48.324117   54335 cri.go:89] found id: ""
	I1205 06:32:48.324131   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.324139   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:48.324144   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:48.324203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:48.349743   54335 cri.go:89] found id: ""
	I1205 06:32:48.349758   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.349765   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:48.349781   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:48.349849   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:48.379197   54335 cri.go:89] found id: ""
	I1205 06:32:48.379211   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.379219   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:48.379225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:48.379283   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:48.404472   54335 cri.go:89] found id: ""
	I1205 06:32:48.404486   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.404493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:48.404499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:48.404555   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:48.430058   54335 cri.go:89] found id: ""
	I1205 06:32:48.430072   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.430079   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:48.430086   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:48.430099   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.459503   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:48.459519   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:48.518141   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:48.518158   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:48.529014   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:48.529031   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:48.601337   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:48.601347   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:48.601357   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
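
Note that the gather order is not stable across cycles: the 06:32:36 and 06:32:39 passes collect kubelet logs first, the 06:32:48 pass starts with container status, and the 06:32:51 pass starts with dmesg. One plausible explanation, offered as an inference rather than something confirmed from the source, is that the log sources live in a Go map, whose iteration order is deliberately randomized per run. A self-contained demonstration of that behavior:

    // Go map iteration order is randomized, which would produce exactly
    // this kind of shuffling between otherwise identical cycles.
    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "containerd":       "journalctl -u containerd -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        // Run this a few times: the printed order usually differs per run.
        for name := range sources {
            fmt.Println("Gathering logs for", name, "...")
        }
    }
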
	I1205 06:32:51.177615   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:51.187543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:51.187599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:51.218589   54335 cri.go:89] found id: ""
	I1205 06:32:51.218603   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.218610   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:51.218615   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:51.218673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:51.243490   54335 cri.go:89] found id: ""
	I1205 06:32:51.243509   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.243516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:51.243521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:51.243577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:51.268372   54335 cri.go:89] found id: ""
	I1205 06:32:51.268385   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.268393   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:51.268398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:51.268458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:51.292432   54335 cri.go:89] found id: ""
	I1205 06:32:51.292445   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.292452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:51.292457   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:51.292513   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:51.316338   54335 cri.go:89] found id: ""
	I1205 06:32:51.316351   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.316358   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:51.316364   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:51.316419   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:51.341611   54335 cri.go:89] found id: ""
	I1205 06:32:51.341625   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.341645   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:51.341650   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:51.341708   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:51.365650   54335 cri.go:89] found id: ""
	I1205 06:32:51.365664   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.365671   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:51.365679   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:51.365690   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:51.377639   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:51.377655   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:51.443518   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:51.443527   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:51.443540   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.505744   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:51.505763   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:51.532869   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:51.532884   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.096225   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:54.106698   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:54.106760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:54.134689   54335 cri.go:89] found id: ""
	I1205 06:32:54.134702   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.134709   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:54.134714   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:54.134769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:54.158113   54335 cri.go:89] found id: ""
	I1205 06:32:54.158126   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.158133   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:54.158138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:54.158199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:54.182422   54335 cri.go:89] found id: ""
	I1205 06:32:54.182436   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.182444   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:54.182448   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:54.182508   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:54.206399   54335 cri.go:89] found id: ""
	I1205 06:32:54.206412   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.206418   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:54.206423   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:54.206481   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:54.229926   54335 cri.go:89] found id: ""
	I1205 06:32:54.229940   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.229947   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:54.229952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:54.230011   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:54.254356   54335 cri.go:89] found id: ""
	I1205 06:32:54.254370   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.254377   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:54.254382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:54.254441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:54.278495   54335 cri.go:89] found id: ""
	I1205 06:32:54.278508   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.278516   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:54.278523   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:54.278533   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:54.305603   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:54.305619   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.360184   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:54.360202   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:54.371510   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:54.371525   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:54.438927   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:54.438936   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:54.438947   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.002913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:57.020172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:57.020235   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:57.044543   54335 cri.go:89] found id: ""
	I1205 06:32:57.044556   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.044564   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:57.044570   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:57.044629   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:57.070053   54335 cri.go:89] found id: ""
	I1205 06:32:57.070067   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.070074   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:57.070079   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:57.070134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:57.094644   54335 cri.go:89] found id: ""
	I1205 06:32:57.094659   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.094666   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:57.094670   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:57.094769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:57.118698   54335 cri.go:89] found id: ""
	I1205 06:32:57.118722   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.118729   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:57.118734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:57.118799   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:57.142854   54335 cri.go:89] found id: ""
	I1205 06:32:57.142868   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.142875   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:57.142881   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:57.142946   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:57.171220   54335 cri.go:89] found id: ""
	I1205 06:32:57.171234   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.171241   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:57.171246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:57.171311   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:57.195529   54335 cri.go:89] found id: ""
	I1205 06:32:57.195544   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.195551   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:57.195558   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:57.195578   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:57.251284   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:57.251305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:57.262555   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:57.262570   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:57.333629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:57.333638   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:57.333651   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.394773   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:57.394791   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:59.923047   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:59.933128   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:59.933207   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:59.960876   54335 cri.go:89] found id: ""
	I1205 06:32:59.960890   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.960896   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:59.960901   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:59.960961   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:59.985649   54335 cri.go:89] found id: ""
	I1205 06:32:59.985664   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.985671   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:59.985676   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:59.985737   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:00.069985   54335 cri.go:89] found id: ""
	I1205 06:33:00.070002   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.070019   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:00.070026   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:00.070103   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:00.156917   54335 cri.go:89] found id: ""
	I1205 06:33:00.156936   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.156945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:00.156958   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:00.157043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:00.284647   54335 cri.go:89] found id: ""
	I1205 06:33:00.284663   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.284672   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:00.284678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:00.284758   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:00.335248   54335 cri.go:89] found id: ""
	I1205 06:33:00.335263   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.335271   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:00.335280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:00.335365   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:00.377235   54335 cri.go:89] found id: ""
	I1205 06:33:00.377251   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.377259   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:00.377267   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:00.377291   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:00.390543   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:00.390561   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:00.464312   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:00.464323   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:00.464334   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:00.528767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:00.528786   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:00.562265   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:00.562282   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
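(Editor's note: the block above repeats for the rest of this section, roughly every three seconds, until the start times out. It is minikube's apiserver wait loop: probe for a kube-apiserver process, list each expected control-plane container via crictl, then gather kubelet/dmesg/describe-nodes/containerd/container-status logs when nothing is found. A minimal shell sketch of that sequence follows; every command is copied verbatim from the log lines above, while the loop wrapper and the sleep interval are illustrative assumptions, not minikube's actual Go implementation.)

	# Sketch of the diagnostic loop visible in this log (assumed while/sleep wrapper;
	# the individual commands are exactly those minikube runs via ssh_runner).
	while true; do
	  # Is an apiserver process for this profile running yet? (exits the loop if so)
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  # Each of these returns no container IDs in the runs above ("found id: \"\"").
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$c"
	  done
	  # "Gathering logs for ..." steps, in the order seen in the first cycle:
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: connection refused on localhost:8441
	  sudo journalctl -u containerd -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  sudo journalctl -u kubelet -n 400
	  sleep 3   # assumed; the timestamps above show roughly 3s between attempts
	done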
	I1205 06:33:03.126784   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:03.137248   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:03.137309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:03.163136   54335 cri.go:89] found id: ""
	I1205 06:33:03.163149   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.163156   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:03.163161   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:03.163221   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:03.189239   54335 cri.go:89] found id: ""
	I1205 06:33:03.189253   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.189261   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:03.189277   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:03.189340   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:03.215019   54335 cri.go:89] found id: ""
	I1205 06:33:03.215032   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.215039   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:03.215045   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:03.215104   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:03.240336   54335 cri.go:89] found id: ""
	I1205 06:33:03.240350   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.240357   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:03.240362   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:03.240421   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:03.264735   54335 cri.go:89] found id: ""
	I1205 06:33:03.264749   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.264762   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:03.264767   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:03.264831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:03.289528   54335 cri.go:89] found id: ""
	I1205 06:33:03.289541   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.289548   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:03.289553   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:03.289658   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:03.315032   54335 cri.go:89] found id: ""
	I1205 06:33:03.315046   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.315053   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:03.315060   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:03.315071   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.371569   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:03.371588   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:03.382809   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:03.382825   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:03.450556   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:03.450566   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:03.450577   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:03.516929   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:03.516948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:06.046009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:06.057281   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:06.057355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:06.084601   54335 cri.go:89] found id: ""
	I1205 06:33:06.084615   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.084623   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:06.084629   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:06.084690   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:06.111286   54335 cri.go:89] found id: ""
	I1205 06:33:06.111300   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.111307   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:06.111313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:06.111374   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:06.136965   54335 cri.go:89] found id: ""
	I1205 06:33:06.136978   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.136985   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:06.136990   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:06.137048   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:06.162299   54335 cri.go:89] found id: ""
	I1205 06:33:06.162312   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.162319   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:06.162325   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:06.162387   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:06.189555   54335 cri.go:89] found id: ""
	I1205 06:33:06.189569   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.189576   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:06.189581   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:06.189645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:06.215170   54335 cri.go:89] found id: ""
	I1205 06:33:06.215184   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.215192   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:06.215198   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:06.215258   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:06.241073   54335 cri.go:89] found id: ""
	I1205 06:33:06.241087   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.241094   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:06.241112   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:06.241123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:06.296188   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:06.296205   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:06.306926   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:06.306941   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:06.371295   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:06.371304   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:06.371316   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:06.432933   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:06.432951   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:08.969294   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:08.979402   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:08.979463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:09.020683   54335 cri.go:89] found id: ""
	I1205 06:33:09.020697   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.020704   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:09.020710   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:09.020771   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:09.046109   54335 cri.go:89] found id: ""
	I1205 06:33:09.046123   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.046130   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:09.046136   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:09.046195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:09.070968   54335 cri.go:89] found id: ""
	I1205 06:33:09.070981   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.070988   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:09.070995   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:09.071056   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:09.096098   54335 cri.go:89] found id: ""
	I1205 06:33:09.096111   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.096118   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:09.096123   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:09.096226   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:09.121468   54335 cri.go:89] found id: ""
	I1205 06:33:09.121482   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.121489   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:09.121495   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:09.121573   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:09.150975   54335 cri.go:89] found id: ""
	I1205 06:33:09.150989   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.150997   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:09.151004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:09.151063   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:09.176504   54335 cri.go:89] found id: ""
	I1205 06:33:09.176517   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.176527   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:09.176534   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:09.176545   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:09.203288   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:09.203302   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:09.259402   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:09.259423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:09.270454   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:09.270470   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:09.334084   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:09.334095   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:09.334105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:11.894816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:11.904810   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:11.904871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:11.930015   54335 cri.go:89] found id: ""
	I1205 06:33:11.930029   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.930036   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:11.930042   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:11.930100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:11.954795   54335 cri.go:89] found id: ""
	I1205 06:33:11.954808   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.954815   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:11.954821   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:11.954877   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:11.978195   54335 cri.go:89] found id: ""
	I1205 06:33:11.978208   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.978231   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:11.978236   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:11.978292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:12.003210   54335 cri.go:89] found id: ""
	I1205 06:33:12.003227   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.003235   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:12.003241   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:12.003326   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:12.033020   54335 cri.go:89] found id: ""
	I1205 06:33:12.033034   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.033041   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:12.033046   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:12.033111   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:12.058060   54335 cri.go:89] found id: ""
	I1205 06:33:12.058073   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.058081   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:12.058086   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:12.058143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:12.082699   54335 cri.go:89] found id: ""
	I1205 06:33:12.082713   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.082719   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:12.082727   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:12.082737   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:12.151250   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:12.151259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:12.151271   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:12.218438   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:12.218461   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:12.248241   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:12.248260   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:12.307820   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:12.307838   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:14.820623   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:14.830697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:14.830756   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:14.863478   54335 cri.go:89] found id: ""
	I1205 06:33:14.863492   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.863499   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:14.863504   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:14.863565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:14.895084   54335 cri.go:89] found id: ""
	I1205 06:33:14.895098   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.895106   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:14.895111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:14.895172   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:14.925468   54335 cri.go:89] found id: ""
	I1205 06:33:14.925482   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.925489   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:14.925494   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:14.925614   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:14.954925   54335 cri.go:89] found id: ""
	I1205 06:33:14.954938   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.954945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:14.954950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:14.955009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:14.980066   54335 cri.go:89] found id: ""
	I1205 06:33:14.980080   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.980088   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:14.980093   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:14.980152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:15.028743   54335 cri.go:89] found id: ""
	I1205 06:33:15.028763   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.028770   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:15.028777   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:15.028845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:15.057623   54335 cri.go:89] found id: ""
	I1205 06:33:15.057636   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.057643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:15.057650   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:15.057661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:15.114789   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:15.114808   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:15.126224   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:15.126240   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:15.193033   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:15.193044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:15.193054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:15.256748   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:15.256767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:17.786454   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:17.796729   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:17.796787   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:17.825815   54335 cri.go:89] found id: ""
	I1205 06:33:17.825828   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.825835   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:17.825840   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:17.825900   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:17.855661   54335 cri.go:89] found id: ""
	I1205 06:33:17.855675   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.855682   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:17.855687   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:17.855744   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:17.883175   54335 cri.go:89] found id: ""
	I1205 06:33:17.883188   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.883195   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:17.883200   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:17.883260   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:17.911578   54335 cri.go:89] found id: ""
	I1205 06:33:17.911592   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.911599   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:17.911604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:17.911662   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:17.939731   54335 cri.go:89] found id: ""
	I1205 06:33:17.939750   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.939758   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:17.939763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:17.939818   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:17.968310   54335 cri.go:89] found id: ""
	I1205 06:33:17.968323   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.968330   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:17.968335   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:17.968392   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:17.992739   54335 cri.go:89] found id: ""
	I1205 06:33:17.992752   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.992759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:17.992765   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:17.992776   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:18.006966   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:18.006985   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:18.077932   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:18.077943   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:18.077954   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:18.141190   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:18.141206   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:18.172978   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:18.172995   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:20.730714   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:20.741267   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:20.741329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:20.765738   54335 cri.go:89] found id: ""
	I1205 06:33:20.765751   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.765758   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:20.765763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:20.765821   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:20.790360   54335 cri.go:89] found id: ""
	I1205 06:33:20.790373   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.790380   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:20.790385   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:20.790446   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:20.815276   54335 cri.go:89] found id: ""
	I1205 06:33:20.815290   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.815297   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:20.815302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:20.815361   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:20.840257   54335 cri.go:89] found id: ""
	I1205 06:33:20.840270   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.840277   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:20.840283   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:20.840345   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:20.869989   54335 cri.go:89] found id: ""
	I1205 06:33:20.870003   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.870010   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:20.870015   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:20.870077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:20.908890   54335 cri.go:89] found id: ""
	I1205 06:33:20.908903   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.908915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:20.908921   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:20.908978   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:20.935421   54335 cri.go:89] found id: ""
	I1205 06:33:20.935435   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.935442   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:20.935450   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:20.935460   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:20.946582   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:20.946597   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:21.010138   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:21.010149   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:21.010172   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:21.077392   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:21.077409   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:21.105240   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:21.105255   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.662909   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:23.672961   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:23.673022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:23.697989   54335 cri.go:89] found id: ""
	I1205 06:33:23.698003   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.698010   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:23.698016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:23.698078   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:23.723698   54335 cri.go:89] found id: ""
	I1205 06:33:23.723712   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.723718   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:23.723723   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:23.723781   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:23.747403   54335 cri.go:89] found id: ""
	I1205 06:33:23.747416   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.747423   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:23.747428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:23.747486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:23.775201   54335 cri.go:89] found id: ""
	I1205 06:33:23.775214   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.775221   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:23.775227   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:23.775290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:23.799494   54335 cri.go:89] found id: ""
	I1205 06:33:23.799507   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.799514   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:23.799519   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:23.799575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:23.824229   54335 cri.go:89] found id: ""
	I1205 06:33:23.824242   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.824249   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:23.824254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:23.824310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:23.851738   54335 cri.go:89] found id: ""
	I1205 06:33:23.851752   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.851759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:23.851767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:23.851777   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:23.897695   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:23.897710   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.961464   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:23.961482   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:23.972542   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:23.972558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:24.046391   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:24.046402   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:24.046414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
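The cycle above is minikube's control-plane probe: for every expected component it runs "sudo crictl ps -a --quiet --name=<name>" and treats empty output as "no container found". A minimal local sketch of that probe in Go (an illustration, not minikube's code; it assumes crictl is on PATH, whereas minikube runs the same command inside the node over SSH):

    // Probe each expected control-plane container by name, as in the log above.
    // Local sketch only: minikube executes the identical command via its SSH runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // crictl prints one container ID per line
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		ids, err := findContainers(name)
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", name, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		}
    	}
    }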
	I1205 06:33:26.611978   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:26.621743   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:26.621802   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:26.645855   54335 cri.go:89] found id: ""
	I1205 06:33:26.645868   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.645875   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:26.645879   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:26.645934   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:26.675349   54335 cri.go:89] found id: ""
	I1205 06:33:26.675363   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.675369   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:26.675374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:26.675430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:26.698540   54335 cri.go:89] found id: ""
	I1205 06:33:26.698554   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.698561   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:26.698566   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:26.698630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:26.721264   54335 cri.go:89] found id: ""
	I1205 06:33:26.721277   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.721283   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:26.721288   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:26.721343   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:26.744526   54335 cri.go:89] found id: ""
	I1205 06:33:26.744539   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.744546   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:26.744551   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:26.744607   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:26.767695   54335 cri.go:89] found id: ""
	I1205 06:33:26.767719   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.767727   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:26.767732   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:26.767792   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:26.791289   54335 cri.go:89] found id: ""
	I1205 06:33:26.791329   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.791336   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:26.791344   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:26.791354   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:26.856152   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:26.856162   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:26.856173   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.930967   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:26.930987   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:26.958183   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:26.958200   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:27.015910   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:27.015927   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
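Every "describe nodes" attempt above fails the same way: kubectl cannot even open a TCP connection to the apiserver on port 8441 ("dial tcp [::1]:8441: connect: connection refused"), so no node state is retrievable. A sketch of that reachability check in Go (a hypothetical standalone helper for illustration, not part of minikube):

    // Reproduce the failing step in isolation: try to open a TCP connection
    // to the apiserver port. While the apiserver is down this fails exactly
    // like the log: dial tcp [::1]:8441: connect: connection refused.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }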
	I1205 06:33:29.527097   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:29.537027   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:29.537087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:29.561570   54335 cri.go:89] found id: ""
	I1205 06:33:29.561583   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.561591   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:29.561598   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:29.561655   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:29.586431   54335 cri.go:89] found id: ""
	I1205 06:33:29.586445   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.586452   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:29.586474   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:29.586543   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:29.615124   54335 cri.go:89] found id: ""
	I1205 06:33:29.615139   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.615145   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:29.615151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:29.615208   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:29.640801   54335 cri.go:89] found id: ""
	I1205 06:33:29.640814   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.640831   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:29.640837   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:29.640893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:29.665711   54335 cri.go:89] found id: ""
	I1205 06:33:29.665725   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.665731   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:29.665737   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:29.665797   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:29.690393   54335 cri.go:89] found id: ""
	I1205 06:33:29.690416   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.690423   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:29.690428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:29.690500   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:29.714522   54335 cri.go:89] found id: ""
	I1205 06:33:29.714535   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.714542   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:29.714550   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:29.714562   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:29.770787   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:29.770804   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.781149   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:29.781179   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:29.848588   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:29.848601   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:29.848612   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:29.927646   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:29.927665   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
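Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", a process-level check that precedes the container probes: pgrep exits with status 1 when nothing matches, which here signals that no apiserver process exists yet. A Go sketch of interpreting that exit code (the helper name is hypothetical; the pgrep exit-status semantics are standard):

    // pgrep exit status 1 means "no matching process", not a failure of
    // pgrep itself, so it maps to (false, nil) rather than an error.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func apiserverRunning() (bool, error) {
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return false, nil
    	}
    	if err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	running, err := apiserverRunning()
    	fmt.Println("apiserver process running:", running, "err:", err)
    }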
	I1205 06:33:32.455807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:32.466055   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:32.466118   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:32.490796   54335 cri.go:89] found id: ""
	I1205 06:33:32.490809   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.490816   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:32.490822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:32.490881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:32.515490   54335 cri.go:89] found id: ""
	I1205 06:33:32.515503   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.515511   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:32.515516   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:32.515577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:32.543147   54335 cri.go:89] found id: ""
	I1205 06:33:32.543161   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.543167   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:32.543172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:32.543234   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:32.567288   54335 cri.go:89] found id: ""
	I1205 06:33:32.567301   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.567308   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:32.567313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:32.567370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:32.594765   54335 cri.go:89] found id: ""
	I1205 06:33:32.594778   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.594785   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:32.594790   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:32.594846   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:32.628174   54335 cri.go:89] found id: ""
	I1205 06:33:32.628187   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.628208   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:32.628223   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:32.628310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:32.653204   54335 cri.go:89] found id: ""
	I1205 06:33:32.653218   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.653225   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:32.653232   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:32.653242   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:32.713436   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:32.713452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:32.723879   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:32.723894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:32.788746   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:32.788757   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:32.788767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:32.850792   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:32.850809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:35.388187   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:35.398195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:35.398254   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:35.421975   54335 cri.go:89] found id: ""
	I1205 06:33:35.421989   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.421996   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:35.422002   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:35.422065   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:35.445920   54335 cri.go:89] found id: ""
	I1205 06:33:35.445934   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.445942   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:35.445947   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:35.446009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:35.471144   54335 cri.go:89] found id: ""
	I1205 06:33:35.471157   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.471164   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:35.471169   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:35.471231   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:35.495788   54335 cri.go:89] found id: ""
	I1205 06:33:35.495802   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.495808   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:35.495814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:35.495871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:35.524598   54335 cri.go:89] found id: ""
	I1205 06:33:35.524621   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.524628   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:35.524633   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:35.524701   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:35.549143   54335 cri.go:89] found id: ""
	I1205 06:33:35.549227   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.549235   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:35.549242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:35.549301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:35.574312   54335 cri.go:89] found id: ""
	I1205 06:33:35.574325   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.574332   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:35.574340   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:35.574352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:35.628890   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:35.628908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:35.639919   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:35.639934   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:35.703264   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:35.695689   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.696286   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.697814   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.698255   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.699741   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:35.703273   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:35.703286   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:35.766049   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:35.766067   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
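The timestamps show the whole probe repeating on a roughly three-second cadence (06:33:26, 06:33:29, 06:33:32, 06:33:35, ...), i.e. a poll-until-deadline loop rather than a one-shot check. A hedged sketch of such a loop (the interval and the checkOnce callback are assumptions for illustration, not minikube's actual wait implementation):

    // Poll a readiness check until it succeeds or the deadline passes.
    package main

    import (
    	"fmt"
    	"time"
    )

    func waitForAPIServer(timeout time.Duration, checkOnce func() bool) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if checkOnce() {
    			return true
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s spacing between cycles in the log
    	}
    	return false
    }

    func main() {
    	ok := waitForAPIServer(30*time.Second, func() bool { return false })
    	fmt.Println("apiserver ready:", ok)
    }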
	I1205 06:33:38.297790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:38.307702   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:38.307762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:38.336326   54335 cri.go:89] found id: ""
	I1205 06:33:38.336340   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.336348   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:38.336353   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:38.336410   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:38.361342   54335 cri.go:89] found id: ""
	I1205 06:33:38.361356   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.361363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:38.361371   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:38.361429   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:38.385186   54335 cri.go:89] found id: ""
	I1205 06:33:38.385200   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.385208   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:38.385213   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:38.385281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:38.413803   54335 cri.go:89] found id: ""
	I1205 06:33:38.413816   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.413824   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:38.413829   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:38.413889   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:38.437536   54335 cri.go:89] found id: ""
	I1205 06:33:38.437572   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.437579   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:38.437585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:38.437645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:38.462979   54335 cri.go:89] found id: ""
	I1205 06:33:38.462993   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.463000   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:38.463006   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:38.463069   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:38.488151   54335 cri.go:89] found id: ""
	I1205 06:33:38.488163   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.488170   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:38.488186   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:38.488196   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:38.544680   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:38.544696   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:38.555626   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:38.555641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:38.618692   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:38.610579   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.611054   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.612674   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.613205   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.614695   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:38.618701   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:38.618712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:38.682609   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:38.682629   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:41.211631   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:41.221454   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:41.221514   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:41.245435   54335 cri.go:89] found id: ""
	I1205 06:33:41.245448   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.245455   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:41.245460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:41.245516   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:41.268900   54335 cri.go:89] found id: ""
	I1205 06:33:41.268913   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.268920   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:41.268925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:41.268980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:41.297438   54335 cri.go:89] found id: ""
	I1205 06:33:41.297452   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.297460   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:41.297471   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:41.297536   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:41.325936   54335 cri.go:89] found id: ""
	I1205 06:33:41.325949   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.325956   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:41.325962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:41.326036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:41.354117   54335 cri.go:89] found id: ""
	I1205 06:33:41.354131   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.354138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:41.354152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:41.354209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:41.378638   54335 cri.go:89] found id: ""
	I1205 06:33:41.378651   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.378658   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:41.378664   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:41.378720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:41.407136   54335 cri.go:89] found id: ""
	I1205 06:33:41.407150   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.407157   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:41.407164   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:41.407176   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:41.466362   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:41.466385   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:41.477977   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:41.477993   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:41.544052   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:41.534487   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.535316   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537008   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537377   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.540464   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:41.544062   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:41.544073   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:41.606455   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:41.606472   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
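The four "Gathering logs for ..." steps are plain shell commands, shown verbatim in the log: journalctl tails the last 400 lines of the kubelet and containerd units, dmesg filters the kernel ring buffer to warnings and above, and the container-status step falls back to docker if crictl is missing. A local Go sketch that runs the same commands (minikube runs them on the node through its SSH runner):

    // Run the gather commands from the log and print what each returns.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    var gatherCmds = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"containerd":       "sudo journalctl -u containerd -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range gatherCmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    	}
    }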
	I1205 06:33:44.134370   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:44.145440   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:44.145497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:44.171961   54335 cri.go:89] found id: ""
	I1205 06:33:44.171975   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.171982   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:44.171987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:44.172046   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:44.197113   54335 cri.go:89] found id: ""
	I1205 06:33:44.197127   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.197134   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:44.197138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:44.197210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:44.222364   54335 cri.go:89] found id: ""
	I1205 06:33:44.222378   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.222385   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:44.222390   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:44.222449   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:44.252062   54335 cri.go:89] found id: ""
	I1205 06:33:44.252075   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.252082   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:44.252087   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:44.252143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:44.277356   54335 cri.go:89] found id: ""
	I1205 06:33:44.277370   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.277377   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:44.277382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:44.277440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:44.302126   54335 cri.go:89] found id: ""
	I1205 06:33:44.302139   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.302146   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:44.302151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:44.302214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:44.326368   54335 cri.go:89] found id: ""
	I1205 06:33:44.326382   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.326389   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:44.326396   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:44.326406   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:44.382509   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:44.382526   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:44.393060   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:44.393075   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:44.454175   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:44.446190   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.447001   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.448578   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.449122   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.450707   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:44.454185   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:44.454195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:44.516835   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:44.516854   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:47.045086   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:47.055463   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:47.055525   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:47.080371   54335 cri.go:89] found id: ""
	I1205 06:33:47.080384   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.080391   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:47.080396   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:47.080458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:47.119514   54335 cri.go:89] found id: ""
	I1205 06:33:47.119527   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.119535   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:47.119539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:47.119594   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:47.147444   54335 cri.go:89] found id: ""
	I1205 06:33:47.147457   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.147464   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:47.147469   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:47.147523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:47.177712   54335 cri.go:89] found id: ""
	I1205 06:33:47.177726   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.177733   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:47.177738   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:47.177800   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:47.202097   54335 cri.go:89] found id: ""
	I1205 06:33:47.202110   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.202118   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:47.202124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:47.202179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:47.226333   54335 cri.go:89] found id: ""
	I1205 06:33:47.226347   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.226354   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:47.226359   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:47.226431   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:47.251986   54335 cri.go:89] found id: ""
	I1205 06:33:47.251999   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.252007   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:47.252014   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:47.252025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:47.308015   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:47.308032   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:47.318805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:47.318820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:47.387458   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:47.379184   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.379724   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.381602   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.382334   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.383761   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:47.379184   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.379724   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.381602   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.382334   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.383761   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:47.387468   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:47.387478   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:47.448913   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:47.448930   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
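	[editor's note] The probe cycle above (and the near-identical cycles that follow) boils down to one crictl query per control-plane component, followed by log gathering when nothing is found. A minimal shell sketch of that loop, runnable by hand inside the node (assumes "minikube ssh -p functional-101526" access and crictl on PATH; the component list is the one polled in the log):

	  # Sketch: reproduce minikube's control-plane probe by hand inside the node.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "no container found matching \"$name\""
	    else
	      echo "$name: $ids"
	    fi
	  done

	Every query in this run returns an empty ID list ('found id: ""'), which is why each cycle falls through to gathering kubelet, dmesg, containerd, and container-status logs.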
	I1205 06:33:49.981882   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:49.991852   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:49.991908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:50.023200   54335 cri.go:89] found id: ""
	I1205 06:33:50.023221   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.023229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:50.023235   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:50.023306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:50.049577   54335 cri.go:89] found id: ""
	I1205 06:33:50.049591   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.049598   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:50.049604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:50.049665   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:50.078681   54335 cri.go:89] found id: ""
	I1205 06:33:50.078695   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.078703   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:50.078708   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:50.078769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:50.115465   54335 cri.go:89] found id: ""
	I1205 06:33:50.115478   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.115485   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:50.115496   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:50.115554   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:50.146578   54335 cri.go:89] found id: ""
	I1205 06:33:50.146591   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.146598   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:50.146603   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:50.146661   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:50.175515   54335 cri.go:89] found id: ""
	I1205 06:33:50.175528   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.175535   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:50.175541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:50.175598   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:50.204420   54335 cri.go:89] found id: ""
	I1205 06:33:50.204433   54335 logs.go:282] 0 containers: []
	W1205 06:33:50.204440   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:50.204449   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:50.204458   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:50.258843   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:50.258860   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:50.269324   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:50.269339   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:50.336484   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:50.328749   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.329537   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331160   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331458   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.332932   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:50.328749   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.329537   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331160   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.331458   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:50.332932   17261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:50.336493   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:50.336515   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:50.399746   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:50.399764   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:52.927181   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:52.937445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:52.937504   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:52.960934   54335 cri.go:89] found id: ""
	I1205 06:33:52.960947   54335 logs.go:282] 0 containers: []
	W1205 06:33:52.960954   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:52.960960   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:52.961022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:52.986242   54335 cri.go:89] found id: ""
	I1205 06:33:52.986255   54335 logs.go:282] 0 containers: []
	W1205 06:33:52.986263   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:52.986268   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:52.986327   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:53.013571   54335 cri.go:89] found id: ""
	I1205 06:33:53.013585   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.013592   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:53.013597   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:53.013660   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:53.039257   54335 cri.go:89] found id: ""
	I1205 06:33:53.039271   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.039278   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:53.039284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:53.039341   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:53.064162   54335 cri.go:89] found id: ""
	I1205 06:33:53.064174   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.064197   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:53.064202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:53.064259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:53.090118   54335 cri.go:89] found id: ""
	I1205 06:33:53.090131   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.090138   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:53.090143   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:53.090211   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:53.129452   54335 cri.go:89] found id: ""
	I1205 06:33:53.129464   54335 logs.go:282] 0 containers: []
	W1205 06:33:53.129471   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:53.129478   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:53.129489   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:53.192396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:53.192413   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:53.203770   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:53.203784   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:53.268406   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:53.260521   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.261282   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.262897   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.263184   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.264650   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:53.260521   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.261282   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.262897   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.263184   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:53.264650   17362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:53.268415   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:53.268427   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:53.331135   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:53.331156   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:55.857914   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:55.868426   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:55.868484   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:55.893813   54335 cri.go:89] found id: ""
	I1205 06:33:55.893826   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.893833   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:55.893838   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:55.893898   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:55.917807   54335 cri.go:89] found id: ""
	I1205 06:33:55.917820   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.917827   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:55.917832   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:55.917890   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:55.942437   54335 cri.go:89] found id: ""
	I1205 06:33:55.942450   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.942457   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:55.942462   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:55.942520   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:55.967048   54335 cri.go:89] found id: ""
	I1205 06:33:55.967061   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.967069   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:55.967075   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:55.967134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:55.995796   54335 cri.go:89] found id: ""
	I1205 06:33:55.995809   54335 logs.go:282] 0 containers: []
	W1205 06:33:55.995817   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:55.995822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:55.995888   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:56.024165   54335 cri.go:89] found id: ""
	I1205 06:33:56.024179   54335 logs.go:282] 0 containers: []
	W1205 06:33:56.024186   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:56.024192   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:56.024255   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:56.050928   54335 cri.go:89] found id: ""
	I1205 06:33:56.050942   54335 logs.go:282] 0 containers: []
	W1205 06:33:56.050949   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:56.050957   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:56.050966   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:56.108175   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:56.108193   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:56.120521   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:56.120536   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:56.188922   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:56.181151   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.181776   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.183592   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.184091   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.185670   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:56.181151   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.181776   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.183592   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.184091   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:56.185670   17466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:56.188933   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:56.188944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:56.250795   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:56.250813   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:58.783821   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:58.794017   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:58.794077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:58.818887   54335 cri.go:89] found id: ""
	I1205 06:33:58.818900   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.818907   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:58.818913   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:58.818970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:58.843085   54335 cri.go:89] found id: ""
	I1205 06:33:58.843098   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.843105   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:58.843111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:58.843173   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:58.873003   54335 cri.go:89] found id: ""
	I1205 06:33:58.873016   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.873024   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:58.873029   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:58.873087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:58.898773   54335 cri.go:89] found id: ""
	I1205 06:33:58.898786   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.898793   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:58.898799   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:58.898857   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:58.923518   54335 cri.go:89] found id: ""
	I1205 06:33:58.923531   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.923538   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:58.923543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:58.923601   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:58.947602   54335 cri.go:89] found id: ""
	I1205 06:33:58.947615   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.947622   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:58.947627   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:58.947685   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:58.972459   54335 cri.go:89] found id: ""
	I1205 06:33:58.972473   54335 logs.go:282] 0 containers: []
	W1205 06:33:58.972480   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:58.972488   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:58.972499   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:58.983301   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:58.983318   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:59.058445   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:59.050735   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.051328   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053133   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053776   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.054887   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:59.050735   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.051328   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053133   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.053776   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:59.054887   17564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:59.058455   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:59.058468   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:59.121838   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:59.121859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:59.153321   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:59.153345   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:01.714396   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:01.724655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:34:01.724715   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:34:01.749246   54335 cri.go:89] found id: ""
	I1205 06:34:01.749259   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.749267   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:34:01.749272   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:34:01.749332   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:34:01.774227   54335 cri.go:89] found id: ""
	I1205 06:34:01.774240   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.774247   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:34:01.774253   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:34:01.774309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:34:01.799574   54335 cri.go:89] found id: ""
	I1205 06:34:01.799588   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.799595   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:34:01.799600   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:34:01.799659   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:34:01.824994   54335 cri.go:89] found id: ""
	I1205 06:34:01.825008   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.825015   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:34:01.825020   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:34:01.825084   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:34:01.854353   54335 cri.go:89] found id: ""
	I1205 06:34:01.854367   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.854374   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:34:01.854380   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:34:01.854440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:34:01.880365   54335 cri.go:89] found id: ""
	I1205 06:34:01.880379   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.880386   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:34:01.880392   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:34:01.880458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:34:01.906944   54335 cri.go:89] found id: ""
	I1205 06:34:01.906957   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.906964   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:34:01.906972   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:34:01.906982   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:34:01.938155   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:34:01.938171   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:01.992877   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:34:01.992895   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:34:02.007261   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:34:02.007278   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:34:02.080660   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:34:02.080669   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:34:02.080680   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:34:04.651581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:04.661868   54335 kubeadm.go:602] duration metric: took 4m3.72973724s to restartPrimaryControlPlane
	W1205 06:34:04.661926   54335 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:34:04.661999   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:34:05.076526   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:34:05.090468   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:34:05.098831   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:34:05.098888   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:34:05.107168   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:34:05.107177   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:34:05.107230   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:34:05.115256   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:34:05.115315   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:34:05.123163   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:34:05.130789   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:34:05.130850   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:34:05.138646   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.147024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:34:05.147082   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.155378   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:34:05.163928   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:34:05.163985   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
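	[editor's note] The stale-config check just performed is equivalent to the following sketch (assumption: same endpoint and file list as in the log; grep's non-zero exit covers both a kubeconfig missing the endpoint and, as in this run, a file that does not exist at all):

	  # Sketch of the stale-kubeconfig cleanup above. grep exits non-zero both
	  # when the endpoint is absent and when the file does not exist, and in
	  # either case the file is removed before "kubeadm init" runs.
	  endpoint="https://control-plane.minikube.internal:8441"
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	      sudo rm -f "/etc/kubernetes/$f"
	    fi
	  done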
	I1205 06:34:05.171609   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:34:05.211033   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:34:05.211109   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:34:05.279588   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:34:05.279653   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:34:05.279688   54335 kubeadm.go:319] OS: Linux
	I1205 06:34:05.279731   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:34:05.279778   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:34:05.279824   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:34:05.279876   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:34:05.279924   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:34:05.279971   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:34:05.280015   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:34:05.280062   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:34:05.280106   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:34:05.346565   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:34:05.346667   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:34:05.346756   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:34:05.352620   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:34:05.358148   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:34:05.358236   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:34:05.358307   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:34:05.358383   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:34:05.358442   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:34:05.358512   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:34:05.358564   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:34:05.358626   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:34:05.358685   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:34:05.358759   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:34:05.358831   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:34:05.358869   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:34:05.358923   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:34:05.469895   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:34:05.573671   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:34:05.924291   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:34:06.081184   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:34:06.337744   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:34:06.338499   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:34:06.342999   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:34:06.346294   54335 out.go:252]   - Booting up control plane ...
	I1205 06:34:06.346403   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:34:06.346486   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:34:06.347115   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:34:06.367588   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:34:06.367869   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:34:06.375582   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:34:06.375840   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:34:06.375882   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:34:06.509639   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:34:06.509751   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:38:06.507887   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288295s
	I1205 06:38:06.507910   54335 kubeadm.go:319] 
	I1205 06:38:06.508003   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:38:06.508055   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:38:06.508166   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:38:06.508171   54335 kubeadm.go:319] 
	I1205 06:38:06.508290   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:38:06.508326   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:38:06.508363   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:38:06.508367   54335 kubeadm.go:319] 
	I1205 06:38:06.511849   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:38:06.512286   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:38:06.512417   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:38:06.512667   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:38:06.512672   54335 kubeadm.go:319] 
	I1205 06:38:06.512746   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 06:38:06.512894   54335 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288295s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
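	[editor's note] kubeadm's own troubleshooting hints in the error above can be collected in one pass; a sketch, assuming SSH access to the node. The healthz URL is the one the wait-control-plane phase polled for 4m0s, and the cgroup check relates to the v1 deprecation warning (the FailCgroupV1 field name is taken from that warning, not verified here):

	  # Probes suggested by kubeadm itself, plus a check of the cgroup mode
	  # behind the v1 deprecation warning. Run inside the node.
	  systemctl status kubelet --no-pager
	  journalctl -xeu kubelet --no-pager | tail -n 100
	  # The exact health probe wait-control-plane polls for up to 4m0s:
	  curl -sSL http://127.0.0.1:10248/healthz; echo
	  # "cgroup2fs" means cgroups v2; "tmpfs" means the deprecated v1 hierarchy.
	  # Per the warning, kubelet v1.35+ on cgroups v1 requires the
	  # KubeletConfiguration option FailCgroupV1 set to false (field name as
	  # printed in the warning; see the linked KEP before relying on it).
	  stat -fc %T /sys/fs/cgroup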
	I1205 06:38:06.512983   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:38:06.919674   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:38:06.932797   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:38:06.932850   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:38:06.940628   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:38:06.940637   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:38:06.940686   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:38:06.948311   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:38:06.948364   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:38:06.955656   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:38:06.963182   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:38:06.963234   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:38:06.970398   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.978024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:38:06.978085   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.985044   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:38:06.992736   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:38:06.992788   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:38:07.000057   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:38:07.042188   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:38:07.042482   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:38:07.116661   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:38:07.116719   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:38:07.116751   54335 kubeadm.go:319] OS: Linux
	I1205 06:38:07.116792   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:38:07.116836   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:38:07.116880   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:38:07.116923   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:38:07.116973   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:38:07.117018   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:38:07.117060   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:38:07.117104   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:38:07.117146   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:38:07.192664   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:38:07.192776   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:38:07.192871   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:38:07.201632   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:38:07.206982   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:38:07.207075   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:38:07.207145   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:38:07.207234   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:38:07.207300   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:38:07.207374   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:38:07.207431   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:38:07.207500   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:38:07.207566   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:38:07.207644   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:38:07.207721   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:38:07.207758   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:38:07.207819   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:38:07.441757   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:38:07.738285   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:38:07.865941   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:38:08.382979   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:38:08.523706   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:38:08.524241   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:38:08.526890   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:38:08.530137   54335 out.go:252]   - Booting up control plane ...
	I1205 06:38:08.530240   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:38:08.530313   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:38:08.530379   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:38:08.552364   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:38:08.552467   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:38:08.559742   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:38:08.560021   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:38:08.560062   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:38:08.679099   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:38:08.679206   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:42:08.679850   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117292s
	I1205 06:42:08.679871   54335 kubeadm.go:319] 
	I1205 06:42:08.679925   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:42:08.679955   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:42:08.680053   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:42:08.680057   54335 kubeadm.go:319] 
	I1205 06:42:08.680155   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:42:08.680184   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:42:08.680212   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:42:08.680215   54335 kubeadm.go:319] 
	I1205 06:42:08.683507   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:42:08.683930   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:42:08.684037   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:42:08.684273   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:42:08.684278   54335 kubeadm.go:319] 
	I1205 06:42:08.684346   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:42:08.684393   54335 kubeadm.go:403] duration metric: took 12m7.791636767s to StartCluster
	I1205 06:42:08.684424   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:42:08.684483   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:42:08.708784   54335 cri.go:89] found id: ""
	I1205 06:42:08.708797   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.708804   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:42:08.708809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:42:08.708865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:42:08.733583   54335 cri.go:89] found id: ""
	I1205 06:42:08.733596   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.733603   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:42:08.733608   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:42:08.733670   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:42:08.762239   54335 cri.go:89] found id: ""
	I1205 06:42:08.762252   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.762259   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:42:08.762264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:42:08.762320   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:42:08.785696   54335 cri.go:89] found id: ""
	I1205 06:42:08.785708   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.785715   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:42:08.785734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:42:08.785790   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:42:08.810075   54335 cri.go:89] found id: ""
	I1205 06:42:08.810088   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.810096   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:42:08.810100   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:42:08.810158   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:42:08.834276   54335 cri.go:89] found id: ""
	I1205 06:42:08.834289   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.834296   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:42:08.834302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:42:08.834358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:42:08.858346   54335 cri.go:89] found id: ""
	I1205 06:42:08.858359   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.858366   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:42:08.858374   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:42:08.858383   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:42:08.913473   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:42:08.913490   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:42:08.924092   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:42:08.924108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:42:08.996046   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:42:08.996056   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:42:08.996066   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:42:09.060557   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:42:09.060575   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
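The Run: lines above enumerate the exact diagnostics minikube gathers after a failed start. To replay them by hand against this profile, the same commands can be issued inside the node (all copied from the log; minikube ssh is the stock way in):

    minikube ssh -p functional-101526
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a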
	W1205 06:42:09.093287   54335 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:42:09.093337   54335 out.go:285] * 
	W1205 06:42:09.093398   54335 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.093427   54335 out.go:285] * 
	W1205 06:42:09.096107   54335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:42:09.099524   54335 out.go:203] 
	W1205 06:42:09.101056   54335 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.101108   54335 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:42:09.101134   54335 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:42:09.103029   54335 out.go:203] 
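The suggestion above is a start-time flag, so retrying it against this profile would look like the following (a sketch; this log never shows the retry being attempted, and the kubelet journal below points at cgroup v1 validation rather than the cgroup driver, so the flag alone may not clear the failure):

    out/minikube-linux-arm64 start -p functional-101526 --extra-config=kubelet.cgroup-driver=systemd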
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:12.406870   21701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:12.407452   21701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:12.409000   21701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:12.409576   21701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:12.411125   21701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:42:12 up  1:24,  0 user,  load average: 0.30, 0.29, 0.40
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:42:09 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 06:42:10 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 kubelet[21521]: E1205 06:42:10.165309   21521 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 06:42:10 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:10 functional-101526 kubelet[21582]: E1205 06:42:10.898917   21582 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:10 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:11 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 05 06:42:11 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:11 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:11 functional-101526 kubelet[21618]: E1205 06:42:11.660627   21618 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:11 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:11 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:42:12 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 05 06:42:12 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:12 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:42:12 functional-101526 kubelet[21706]: E1205 06:42:12.399266   21706 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:42:12 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:42:12 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
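The kubelet journal above pins down the root cause: restarts 322 through 325 all die in configuration validation because the host runs cgroups v1 and the kubelet is configured to reject it. A quick manual check of which hierarchy the node actually mounts (a sketch, run inside the node; the interpretation in the comment is the standard one, not taken from this log):

    stat -fc %T /sys/fs/cgroup/
    # cgroup2fs => cgroups v2; tmpfs => legacy cgroups v1 hierarchy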
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (328.394572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-101526 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-101526 apply -f testdata/invalidsvc.yaml: exit status 1 (56.52185ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-101526 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
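The --validate=false escape hatch quoted in the stderr only skips the client-side OpenAPI download; the apply still has to reach the apiserver on 192.168.49.2:8441, so with the control plane down it would fail with the same connection refused. For completeness, the suggested invocation (an illustration, not a fix for this failure):

    kubectl --context functional-101526 apply -f testdata/invalidsvc.yaml --validate=false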

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101526 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101526 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101526 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101526 --alsologtostderr -v=1] stderr:
I1205 06:44:08.886925   71524 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:08.887426   71524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:08.887448   71524 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:08.887464   71524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:08.887736   71524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:08.888017   71524 mustload.go:66] Loading cluster: functional-101526
I1205 06:44:08.888492   71524 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:08.889027   71524 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:08.909696   71524 host.go:66] Checking if "functional-101526" exists ...
I1205 06:44:08.910030   71524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:44:08.967318   71524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.957305452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:44:08.967455   71524 api_server.go:166] Checking apiserver status ...
I1205 06:44:08.967524   71524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 06:44:08.967568   71524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:08.984336   71524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
W1205 06:44:09.090736   71524 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1205 06:44:09.093819   71524 out.go:179] * The control-plane node functional-101526 apiserver is not running: (state=Stopped)
I1205 06:44:09.096556   71524 out.go:179]   To start a cluster, run: "minikube start -p functional-101526"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
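The inspect output above shows every container port published on 127.0.0.1, with the apiserver port 8441 mapped to host port 32791. As an illustrative aside (not something the test run executes), a single mapped port can be pulled straight out of this JSON with a Go-template query, assuming the same docker CLI is available on the host:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-101526   # prints 32791 for the state captured above

.NetworkSettings.Ports is a map from container port to a list of host bindings, so two index calls select the first binding before reading its HostPort field.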
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (311.820389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-101526 service hello-node --url --format={{.IP}}                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service   │ functional-101526 service hello-node --url                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh -- ls -la /mount-9p                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh cat /mount-9p/test-1764917039148435049                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh -- ls -la /mount-9p                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount1 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount2 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount3 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount1                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount2                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount3                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ mount     │ -p functional-101526 --kill=true                                                                                                                    │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-101526 --alsologtostderr -v=1                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:44:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:44:08.633583   71453 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:44:08.633762   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.633793   71453 out.go:374] Setting ErrFile to fd 2...
	I1205 06:44:08.633814   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.634102   71453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:44:08.634499   71453 out.go:368] Setting JSON to false
	I1205 06:44:08.635318   71453 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5196,"bootTime":1764911853,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:44:08.635413   71453 start.go:143] virtualization:  
	I1205 06:44:08.638655   71453 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:44:08.642448   71453 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:08.642550   71453 notify.go:221] Checking for updates...
	I1205 06:44:08.648062   71453 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:08.651012   71453 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:44:08.653873   71453 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:44:08.656772   71453 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:44:08.659679   71453 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:08.662989   71453 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:08.663608   71453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:08.693276   71453 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:44:08.693431   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.754172   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.744884021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.754286   71453 docker.go:319] overlay module found
	I1205 06:44:08.759070   71453 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:08.761929   71453 start.go:309] selected driver: docker
	I1205 06:44:08.761955   71453 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.762046   71453 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:08.762156   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.814269   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.805597876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.814712   71453 cni.go:84] Creating CNI manager for ""
	I1205 06:44:08.814781   71453 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:44:08.814823   71453 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.818107   71453 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:44:10.151832   23717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:10.152675   23717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:10.154199   23717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:10.154658   23717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:10.156155   23717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:44:10 up  1:26,  0 user,  load average: 0.90, 0.41, 0.42
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:44:06 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:07 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 05 06:44:07 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:07 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:07 functional-101526 kubelet[23576]: E1205 06:44:07.409893   23576 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:07 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:07 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 05 06:44:08 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 kubelet[23595]: E1205 06:44:08.159694   23595 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 05 06:44:08 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 kubelet[23603]: E1205 06:44:08.906544   23603 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 05 06:44:09 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:09 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:09 functional-101526 kubelet[23632]: E1205 06:44:09.666694   23632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (301.72574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)
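The kubelet section of the logs above shows the underlying failure: every systemd restart (counters 478 through 481) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and the dashboard test fails on its status precondition. A quick way to confirm the cgroup mode of the node (a diagnostic sketch, not something this suite runs) is to check the filesystem type mounted at /sys/fs/cgroup inside the container:

	out/minikube-linux-arm64 -p functional-101526 ssh -- stat -fc %T /sys/fs/cgroup/   # cgroup2fs means cgroup v2; tmpfs indicates cgroup v1

Given the Ubuntu 20.04 host kernel reported in the logs, this check would be expected to indicate cgroup v1, matching the validation error.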

                                                
                                    

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 status: exit status 2 (298.280545ms)

                                                
                                                
-- stdout --
	functional-101526
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-101526 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (308.126978ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-101526 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 status -o json: exit status 2 (331.418304ms)

                                                
                                                
-- stdout --
	{"Name":"functional-101526","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-101526 status -o json" : exit status 2
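All three status invocations exit 2 because the cluster components are not healthy, but the JSON form still prints a complete object on stdout, so fields remain readable despite the non-zero exit. An illustrative one-liner (assuming jq is installed on the host, which the suite does not require):

	out/minikube-linux-arm64 -p functional-101526 status -o json | jq -r .APIServer   # prints Stopped for the state above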
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
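This second inspect dump is essentially unchanged from the first: the container is still up with 8441/tcp published at 127.0.0.1:32791, while status reports the apiserver Stopped. As an illustrative cross-check (not part of the suite), probing the published port from the host should reproduce the same refusal seen in the earlier describe-nodes errors:

	curl -k https://127.0.0.1:32791/livez   # expected: connection refused while the apiserver inside the node is down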
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (337.890386ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-101526 addons list -o json                                                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:43 UTC │
	│ service │ functional-101526 service list                                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service │ functional-101526 service list -o json                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service │ functional-101526 service --namespace=default --https --url hello-node                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service │ functional-101526 service hello-node --url --format={{.IP}}                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service │ functional-101526 service hello-node --url                                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh     │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ mount   │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh     │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh -- ls -la /mount-9p                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh cat /mount-9p/test-1764917039148435049                                                                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh     │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount   │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh     │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh -- ls -la /mount-9p                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount   │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount1 --alsologtostderr -v=1                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount   │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount2 --alsologtostderr -v=1                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount   │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount3 --alsologtostderr -v=1                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh     │ functional-101526 ssh findmnt -T /mount1                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh findmnt -T /mount2                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh     │ functional-101526 ssh findmnt -T /mount3                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ mount   │ -p functional-101526 --kill=true                                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:29:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:29:56.087419   54335 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:29:56.087558   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087562   54335 out.go:374] Setting ErrFile to fd 2...
	I1205 06:29:56.087566   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087860   54335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:29:56.088207   54335 out.go:368] Setting JSON to false
	I1205 06:29:56.088971   54335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4343,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:29:56.089024   54335 start.go:143] virtualization:  
	I1205 06:29:56.093248   54335 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:29:56.096933   54335 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:29:56.097023   54335 notify.go:221] Checking for updates...
	I1205 06:29:56.100720   54335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:29:56.103681   54335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:29:56.106734   54335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:29:56.110260   54335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:29:56.113288   54335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:29:56.116882   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:56.116976   54335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:29:56.159923   54335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:29:56.160029   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.216532   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.206341969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.216625   54335 docker.go:319] overlay module found
	I1205 06:29:56.221471   54335 out.go:179] * Using the docker driver based on existing profile
	I1205 06:29:56.224343   54335 start.go:309] selected driver: docker
	I1205 06:29:56.224353   54335 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.224443   54335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:29:56.224557   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.277319   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.268438767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.277800   54335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:29:56.277821   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:56.277884   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:56.278047   54335 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.282961   54335 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:29:56.285729   54335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:29:56.288624   54335 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:29:56.291591   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:56.291657   54335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:29:56.310650   54335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:29:56.310660   54335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:29:56.348534   54335 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:29:56.550462   54335 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
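The two 404 warnings above mean no preload tarball has been published for v1.35.0-beta.0 (common for a beta tag), so minikube falls back to the individually cached images logged next. A minimal sketch of the same availability probe, assuming only the URL layout visible in the log; the function and its name are illustrative, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

// preloadExists issues a HEAD request against the preload tarball URL and
// reports whether the object is published (HTTP 200) or absent (HTTP 404).
func preloadExists(k8sVersion, runtime, arch string) (bool, error) {
	url := fmt.Sprintf("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%s/preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
		k8sVersion, k8sVersion, runtime, arch)
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := preloadExists("v1.35.0-beta.0", "containerd", "arm64")
	fmt.Println(ok, err) // for this run: false <nil>, matching the 404s above
}
```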
	I1205 06:29:56.550637   54335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:29:56.550701   54335 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550781   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:29:56.550790   54335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.262µs
	I1205 06:29:56.550802   54335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:29:56.550812   54335 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550840   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:29:56.550844   54335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.707µs
	I1205 06:29:56.550849   54335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550857   54335 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550888   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:29:56.550892   54335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.93µs
	I1205 06:29:56.550897   54335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550906   54335 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550932   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:29:56.550937   54335 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:29:56.550939   54335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.076µs
	I1205 06:29:56.550944   54335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550952   54335 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550977   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:29:56.550965   54335 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550981   54335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.187µs
	I1205 06:29:56.550986   54335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550993   54335 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551016   54335 start.go:364] duration metric: took 28.546µs to acquireMachinesLock for "functional-101526"
	I1205 06:29:56.551022   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:29:56.551025   54335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.24µs
	I1205 06:29:56.551035   54335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:29:56.551034   54335 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:29:56.551039   54335 fix.go:54] fixHost starting: 
	I1205 06:29:56.551042   54335 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551065   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:29:56.551069   54335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.16µs
	I1205 06:29:56.551073   54335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:29:56.551081   54335 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551103   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:29:56.551106   54335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 26.888µs
	I1205 06:29:56.551110   54335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:29:56.551117   54335 cache.go:87] Successfully saved all images to host disk.
	I1205 06:29:56.551339   54335 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:29:56.568156   54335 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:29:56.568181   54335 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:29:56.571582   54335 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:29:56.571608   54335 machine.go:94] provisionDockerMachine start ...
	I1205 06:29:56.571688   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.588675   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.588995   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.589001   54335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:29:56.736543   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.736557   54335 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:29:56.736615   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.754489   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.754781   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.754789   54335 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:29:56.915291   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.915355   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.933044   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.933393   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.933407   54335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:29:57.085183   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:29:57.085199   54335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:29:57.085221   54335 ubuntu.go:190] setting up certificates
	I1205 06:29:57.085229   54335 provision.go:84] configureAuth start
	I1205 06:29:57.085299   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:57.101349   54335 provision.go:143] copyHostCerts
	I1205 06:29:57.101410   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:29:57.101421   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:29:57.101492   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:29:57.101592   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:29:57.101596   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:29:57.101621   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:29:57.101678   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:29:57.101680   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:29:57.101703   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:29:57.101750   54335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:29:57.543303   54335 provision.go:177] copyRemoteCerts
	I1205 06:29:57.543357   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:29:57.543409   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.560691   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
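The Port:32788 in the ssh client above is produced by the docker-inspect format string run repeatedly in this log, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, which is a Go text/template expression handed to the docker CLI. Evaluating that template against a hand-rolled stand-in for docker's container JSON (the struct below is illustrative, not docker's real API types) shows how the host SSH port falls out:

```go
package main

import (
	"os"
	"text/template"
)

type portBinding struct{ HostPort string }

func main() {
	// Trimmed-down stand-in for the container JSON that docker inspect renders.
	var container struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}
	container.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostPort: "32788"}},
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, container) // prints: 32788
}
```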
	I1205 06:29:57.666006   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:29:57.683446   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:29:57.700645   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:29:57.717863   54335 provision.go:87] duration metric: took 632.597506ms to configureAuth
	I1205 06:29:57.717880   54335 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:29:57.718064   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:57.718070   54335 machine.go:97] duration metric: took 1.146457487s to provisionDockerMachine
	I1205 06:29:57.718076   54335 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:29:57.718086   54335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:29:57.718137   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:29:57.718174   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.735331   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.841496   54335 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:29:57.844702   54335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:29:57.844721   54335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:29:57.844731   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:29:57.844783   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:29:57.844859   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:29:57.844934   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:29:57.844984   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:29:57.852337   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:57.869668   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:29:57.887019   54335 start.go:296] duration metric: took 168.92936ms for postStartSetup
	I1205 06:29:57.887102   54335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:29:57.887149   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.903894   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.011756   54335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:29:58.016900   54335 fix.go:56] duration metric: took 1.465853892s for fixHost
	I1205 06:29:58.016919   54335 start.go:83] releasing machines lock for "functional-101526", held for 1.465896107s
	I1205 06:29:58.016988   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:58.035591   54335 ssh_runner.go:195] Run: cat /version.json
	I1205 06:29:58.035642   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.035909   54335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:29:58.035957   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.053529   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.058886   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.156784   54335 ssh_runner.go:195] Run: systemctl --version
	I1205 06:29:58.245777   54335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:29:58.249918   54335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:29:58.249974   54335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:29:58.257133   54335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:29:58.257146   54335 start.go:496] detecting cgroup driver to use...
	I1205 06:29:58.257190   54335 detect.go:187] detected "cgroupfs" cgroup driver on host os
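detect.go reports "cgroupfs" here, and that choice propagates into both the containerd config and the KubeletConfiguration below. The cgroupfs-vs-systemd decision usually hinges on whether systemd owns the cgroup tree; two cheap probes for that, sketched as generic Linux checks and not as minikube's actual detect.go logic:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// A booted systemd creates /run/systemd/system; without it, delegating
	// cgroup management to systemd is off the table and cgroupfs is the safe pick.
	_, err := os.Stat("/run/systemd/system")
	systemdBooted := err == nil

	// cgroup v2 (unified hierarchy) exposes cgroup.controllers at the root.
	_, err = os.Stat("/sys/fs/cgroup/cgroup.controllers")
	unified := err == nil

	fmt.Printf("systemd booted: %v, cgroup v2: %v\n", systemdBooted, unified)
}
```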
	I1205 06:29:58.257233   54335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:29:58.273979   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:29:58.288748   54335 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:29:58.288814   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:29:58.305248   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:29:58.319216   54335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:29:58.440307   54335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:29:58.559446   54335 docker.go:234] disabling docker service ...
	I1205 06:29:58.559504   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:29:58.574399   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:29:58.587407   54335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:29:58.701676   54335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:29:58.808689   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:29:58.821276   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:29:58.836401   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:29:58.846421   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:29:58.855275   54335 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:29:58.855341   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:29:58.864125   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.872649   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:29:58.881354   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.890354   54335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:29:58.898337   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:29:58.907106   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:29:58.915882   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:29:58.924414   54335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:29:58.931809   54335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:29:58.939114   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.065680   54335 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 06:29:59.195981   54335 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:29:59.196040   54335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:29:59.199987   54335 start.go:564] Will wait 60s for crictl version
	I1205 06:29:59.200039   54335 ssh_runner.go:195] Run: which crictl
	I1205 06:29:59.203560   54335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:29:59.235649   54335 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:29:59.235710   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.255405   54335 ssh_runner.go:195] Run: containerd --version
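The sed runs above pin the sandbox image, force SystemdCgroup = false to match the cgroupfs driver detected earlier, and normalize the runc runtime type before containerd is restarted. The SystemdCgroup edit expressed as a Go regexp replacement over a fabricated config.toml fragment (illustrative only; minikube really performs it with sed over the node's file, as logged):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```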
	I1205 06:29:59.283346   54335 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:29:59.286262   54335 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:29:59.301845   54335 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:29:59.308610   54335 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:29:59.311441   54335 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:29:59.311553   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:59.311627   54335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:29:59.336067   54335 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:29:59.336079   54335 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:29:59.336085   54335 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:29:59.336175   54335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:29:59.336232   54335 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:29:59.363378   54335 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:29:59.363395   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:59.363403   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:59.363415   54335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:29:59.363436   54335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:29:59.363559   54335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:29:59.363624   54335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:29:59.371046   54335 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:29:59.371108   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:29:59.378354   54335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:29:59.390503   54335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:29:59.402745   54335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
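The 2087-byte kubeadm.yaml.new just copied to the node is the multi-document YAML printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". A sketch of splitting such a stream into its kind-tagged documents, assuming the gopkg.in/yaml.v3 dependency; minikube only scp's the file here, so this parsing step is purely illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	// yaml.Decoder yields one document per "---"-separated block.
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```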
	I1205 06:29:59.414910   54335 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:29:59.418646   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.529578   54335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:29:59.846402   54335 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:29:59.846413   54335 certs.go:195] generating shared ca certs ...
	I1205 06:29:59.846426   54335 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:29:59.846569   54335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:29:59.846610   54335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:29:59.846616   54335 certs.go:257] generating profile certs ...
	I1205 06:29:59.846728   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:29:59.846770   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:29:59.846811   54335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:29:59.846921   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:29:59.846956   54335 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:29:59.846962   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:29:59.846989   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:29:59.847014   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:29:59.847036   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:29:59.847085   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:59.847736   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:29:59.867939   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:29:59.888562   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:29:59.907283   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:29:59.927879   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:29:59.944224   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:29:59.960459   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:29:59.979078   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:29:59.996293   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:30:00.066962   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:30:00.118991   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:30:00.185989   54335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:30:00.235503   54335 ssh_runner.go:195] Run: openssl version
	I1205 06:30:00.255104   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.270140   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:30:00.290181   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295705   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295771   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.399762   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:30:00.412238   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.433387   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:30:00.449934   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455249   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455319   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.517764   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:30:00.530824   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.546605   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:30:00.555560   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561005   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561068   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.611790   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
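The openssl x509 -hash -noout / test -L /etc/ssl/certs/<hash>.0 pairs above follow OpenSSL's c_rehash convention: a CA becomes system-trusted by symlinking it into /etc/ssl/certs under its subject hash plus a ".0" suffix (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A minimal sketch of that sequence, assuming root privileges and openssl on PATH; not minikube's exact code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, the same convention the log above is verifying.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }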
	I1205 06:30:00.623580   54335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:30:00.628736   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:30:00.674439   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:30:00.717432   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:30:00.760669   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:30:00.802949   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:30:00.845730   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
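Each openssl x509 -checkend 86400 call above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs need regeneration. The same check in pure Go with crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // equivalent to `openssl x509 -noout -checkend <seconds>` failing.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("needs renewal:", soon)
    }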
	I1205 06:30:00.892769   54335 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:30:00.892871   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:30:00.892957   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.923464   54335 cri.go:89] found id: ""
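Container discovery here goes through crictl with a label filter: --label io.kubernetes.pod.namespace=kube-system restricts the listing to kube-system pods, and --quiet returns bare container IDs, so an empty result is what the found id: "" line records. A sketch of collecting those IDs, not minikube's cri.go itself:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the IDs printed by
    // `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; empty slice if none
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }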
	I1205 06:30:00.923530   54335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:30:00.932111   54335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:30:00.932122   54335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:30:00.932182   54335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:30:00.940210   54335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:00.940808   54335 kubeconfig.go:125] found "functional-101526" server: "https://192.168.49.2:8441"
	I1205 06:30:00.942221   54335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:30:00.951085   54335 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:15:26.552544518 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:29:59.409281720 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
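The drift detection above rests on diff -u semantics: exit status 0 means the old and new kubeadm.yaml are identical, status 1 means they differ, so a status-1 result (here, the changed enable-admission-plugins value) triggers a cluster reconfigure from the new file. A sketch of that decision, assuming diff on PATH:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new` and maps exit status 1 to
    // "drift detected", as in the kubeadm.yaml comparison above.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical files
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            return true, string(out), nil // files differ: reconfigure
        }
        return false, "", err // status 2: diff itself failed
    }

    func main() {
        drifted, patch, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        if drifted {
            fmt.Println("will reconfigure cluster from new config:\n" + patch)
        }
    }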
	I1205 06:30:00.951105   54335 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:30:00.951116   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1205 06:30:00.951177   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.983535   54335 cri.go:89] found id: ""
	I1205 06:30:00.983600   54335 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:30:00.999793   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:30:01.011193   54335 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  5 06:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Dec  5 06:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  5 06:19 /etc/kubernetes/scheduler.conf
	
	I1205 06:30:01.011277   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:30:01.020421   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:30:01.029014   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.029083   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:30:01.037495   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.045879   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.045943   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.054299   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:30:01.063067   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.063128   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
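The grep/rm pairs above implement a keep-or-regenerate pass: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8441; a grep that exits 1 (no match) marks the file for removal so the kubeadm kubeconfig phase can rewrite it. A compact sketch of the same pass:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfigs removes any conf file that does not reference
    // endpoint, mirroring the grep-then-rm sequence in the log above.
    func pruneStaleKubeconfigs(endpoint string, files []string) error {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                return err
            }
            if !strings.Contains(string(data), endpoint) {
                if err := os.Remove(f); err != nil { // kubeadm will recreate it
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", files); err != nil {
            fmt.Println(err)
        }
    }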
	I1205 06:30:01.071319   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:30:01.080035   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:01.126871   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.550689   54335 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.423791138s)
	I1205 06:30:02.550750   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.758304   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.826924   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
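Note that the restart path does not run a full kubeadm init: it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, with PATH pointed at the versioned binaries under /var/lib/minikube/binaries/v1.35.0-beta.0. A sketch of that loop under those assumptions; the phase list is taken from the log, the helper name is mine:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases seen in the log,
    // in order, against one config file.
    func runInitPhases(binDir, config string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            cmd := exec.Command("kubeadm", args...)
            // Later duplicate env entries win, so this overrides PATH.
            cmd.Env = append(os.Environ(), "PATH="+binDir+":/usr/bin:/bin")
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        err := runInitPhases("/var/lib/minikube/binaries/v1.35.0-beta.0", "/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
        }
    }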
	I1205 06:30:02.872904   54335 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:30:02.872975   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:03.373516   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:03.873269   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:04.373873   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:04.873262   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:05.374099   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:05.873790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:06.374013   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:06.873783   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:07.373319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:07.874006   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:08.374019   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:08.873288   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:09.373772   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:09.873842   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:10.373300   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:10.874107   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:11.373177   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:11.873355   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:12.373736   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:12.873308   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:13.374049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:13.873112   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:14.374044   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:14.873826   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:15.373350   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:15.873570   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:16.373205   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:16.873133   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:17.373949   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:17.873343   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:18.373376   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:18.873437   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:19.373102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:19.874076   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:20.373694   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:20.873676   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:21.373293   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:21.873915   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:22.373279   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:22.873197   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:23.373182   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:23.873041   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:24.373194   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:24.873913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:25.373334   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:25.874011   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:26.373620   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:26.873898   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:27.373174   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:27.874034   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:28.373282   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:28.873430   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:29.374096   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:29.873271   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:30.373863   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:30.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:31.373041   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:31.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:32.373311   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:32.873944   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:33.373660   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:33.873399   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:34.373269   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:34.873154   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:35.374056   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:35.873925   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:36.373314   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:36.873816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:37.373079   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:37.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:38.373973   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:38.873278   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:39.373892   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:39.873395   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:40.373274   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:40.874009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:41.374054   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:41.873330   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:42.373986   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:42.873130   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:43.373582   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:43.873189   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:44.373894   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:44.873102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:45.373202   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:45.873349   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:46.373273   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:46.873147   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:47.374102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:47.873856   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:48.374059   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:48.873728   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:49.373337   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:49.873152   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:50.373886   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:50.873110   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:51.373740   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:51.873807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:52.373287   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:52.873175   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:53.373983   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:53.873898   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:54.374080   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:54.873113   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:55.373274   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:55.874004   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:56.373964   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:56.873273   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:57.373188   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:57.873857   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:58.373297   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:58.873189   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:59.373797   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:59.874078   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:00.374118   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:00.873073   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:01.373094   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:01.873990   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:02.373960   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
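The block above is a fixed-interval wait: every ~500 ms (note the .373/.873 timestamps) the runner probes for a kube-apiserver process with pgrep, and after about a minute without a hit it falls through to collecting diagnostics. The shape of that loop, sketched with a deadline; interval and timeout are read off the log, not minikube constants:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*`
    // every interval until it matches or the deadline passes, like the
    // repeated Run lines above.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil // process found
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(500*time.Millisecond, time.Minute); err != nil {
            fmt.Println(err)
        }
    }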
	I1205 06:31:02.873246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:02.873355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:02.899119   54335 cri.go:89] found id: ""
	I1205 06:31:02.899133   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.899140   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:02.899145   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:02.899201   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:02.926015   54335 cri.go:89] found id: ""
	I1205 06:31:02.926028   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.926036   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:02.926041   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:02.926100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:02.950775   54335 cri.go:89] found id: ""
	I1205 06:31:02.950788   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.950795   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:02.950800   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:02.950859   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:02.978268   54335 cri.go:89] found id: ""
	I1205 06:31:02.978282   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.978289   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:02.978294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:02.978352   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:03.015482   54335 cri.go:89] found id: ""
	I1205 06:31:03.015497   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.015506   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:03.015511   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:03.015575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:03.041353   54335 cri.go:89] found id: ""
	I1205 06:31:03.041366   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.041373   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:03.041379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:03.041463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:03.066457   54335 cri.go:89] found id: ""
	I1205 06:31:03.066472   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.066479   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:03.066487   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:03.066502   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:03.121069   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:03.121087   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
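Log gathering is capped per source: journalctl -u <unit> -n 400 takes only the last 400 lines of each systemd unit's journal, and dmesg is filtered to warn-and-above levels before its own 400-line tail. A sketch of the journal tail step, assuming journalctl on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailUnitLogs returns the last n lines of a systemd unit's journal,
    // as in `journalctl -u <unit> -n 400` above.
    func tailUnitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            logs, err := tailUnitLogs(unit, 400)
            if err != nil {
                fmt.Println(unit, "failed:", err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n", unit, len(logs))
        }
    }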
	I1205 06:31:03.131794   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:03.131809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:03.195836   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:03.195847   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:03.195859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:03.258177   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:03.258195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
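The final gathering step is a shell-level fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl by path if installed, tries it anyway otherwise, and only falls back to docker ps -a if the CRI listing fails. The same preference order in Go, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // matching the `... || sudo docker ps -a` chain in the log above.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both runtimes failed:", err)
            return
        }
        fmt.Print(out)
    }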
	I1205 06:31:05.785947   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:05.795932   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:05.795992   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:05.822996   54335 cri.go:89] found id: ""
	I1205 06:31:05.823010   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.823017   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:05.823022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:05.823079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:05.851647   54335 cri.go:89] found id: ""
	I1205 06:31:05.851660   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.851667   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:05.851671   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:05.851728   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:05.888840   54335 cri.go:89] found id: ""
	I1205 06:31:05.888853   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.888860   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:05.888865   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:05.888923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:05.916749   54335 cri.go:89] found id: ""
	I1205 06:31:05.916763   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.916771   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:05.916776   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:05.916838   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:05.941885   54335 cri.go:89] found id: ""
	I1205 06:31:05.941898   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.941905   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:05.941910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:05.941970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:05.967174   54335 cri.go:89] found id: ""
	I1205 06:31:05.967188   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.967195   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:05.967202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:05.967259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:05.991608   54335 cri.go:89] found id: ""
	I1205 06:31:05.991622   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.991629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:05.991637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:05.991647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:06.048885   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:06.048907   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:06.060386   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:06.060403   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:06.139830   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:06.139840   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:06.139853   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:06.202288   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:06.202307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:08.730029   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:08.740211   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:08.740272   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:08.763977   54335 cri.go:89] found id: ""
	I1205 06:31:08.763991   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.763998   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:08.764004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:08.764064   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:08.788621   54335 cri.go:89] found id: ""
	I1205 06:31:08.788635   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.788642   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:08.788647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:08.788702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:08.813441   54335 cri.go:89] found id: ""
	I1205 06:31:08.813454   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.813461   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:08.813466   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:08.813522   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:08.837930   54335 cri.go:89] found id: ""
	I1205 06:31:08.837944   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.837951   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:08.837956   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:08.838014   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:08.865898   54335 cri.go:89] found id: ""
	I1205 06:31:08.865911   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.865918   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:08.865923   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:08.865985   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:08.893385   54335 cri.go:89] found id: ""
	I1205 06:31:08.893410   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.893417   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:08.893422   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:08.893488   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:08.922394   54335 cri.go:89] found id: ""
	I1205 06:31:08.922407   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.922414   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:08.922422   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:08.922432   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:08.977895   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:08.977913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:08.989011   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:08.989025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:09.057444   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:09.057456   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:09.057471   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:09.119855   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:09.119875   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.657869   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:11.668122   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:11.668185   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:11.692170   54335 cri.go:89] found id: ""
	I1205 06:31:11.692183   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.692190   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:11.692195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:11.692253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:11.716930   54335 cri.go:89] found id: ""
	I1205 06:31:11.716945   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.716951   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:11.716962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:11.717031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:11.741795   54335 cri.go:89] found id: ""
	I1205 06:31:11.741808   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.741815   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:11.741820   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:11.741881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:11.766411   54335 cri.go:89] found id: ""
	I1205 06:31:11.766425   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.766431   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:11.766437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:11.766495   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:11.791195   54335 cri.go:89] found id: ""
	I1205 06:31:11.791209   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.791216   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:11.791221   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:11.791280   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:11.819219   54335 cri.go:89] found id: ""
	I1205 06:31:11.819233   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.819245   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:11.819251   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:11.819312   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:11.851464   54335 cri.go:89] found id: ""
	I1205 06:31:11.851478   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.851491   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:11.851498   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:11.851508   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:11.931606   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:11.931625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.960389   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:11.960407   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:12.021080   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:12.021102   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:12.032273   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:12.032290   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:12.097324   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:14.597581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:14.607724   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:14.607782   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:14.632907   54335 cri.go:89] found id: ""
	I1205 06:31:14.632921   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.632928   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:14.632933   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:14.632989   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:14.657884   54335 cri.go:89] found id: ""
	I1205 06:31:14.657898   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.657905   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:14.657910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:14.657965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:14.681364   54335 cri.go:89] found id: ""
	I1205 06:31:14.681377   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.681384   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:14.681389   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:14.681462   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:14.709552   54335 cri.go:89] found id: ""
	I1205 06:31:14.709566   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.709573   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:14.709578   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:14.709642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:14.733105   54335 cri.go:89] found id: ""
	I1205 06:31:14.733118   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.733125   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:14.733130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:14.733217   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:14.759861   54335 cri.go:89] found id: ""
	I1205 06:31:14.759874   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.759881   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:14.759887   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:14.759943   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:14.785666   54335 cri.go:89] found id: ""
	I1205 06:31:14.785679   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.785686   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:14.785693   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:14.785706   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:14.854767   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:14.854785   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:14.854795   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:14.922701   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:14.922719   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:14.953207   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:14.953223   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:15.010462   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:15.010484   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
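The block above is one pass of minikube's wait-for-apiserver loop: pgrep looks for a running kube-apiserver process, the CRI is then queried for each expected control-plane container by name, and, with none found, the kubelet, dmesg, describe-nodes, containerd, and container-status logs are gathered before the next retry. A minimal sketch of the same per-component check, run by hand inside the node (the component list is inferred from the names queried above; `minikube ssh -p functional-101526` is one way to get a shell there):

	# Hypothetical manual reproduction of the per-component CRI check.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  # --quiet prints container IDs only; empty output means no
	  # container for this component exists in any state.
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done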
	I1205 06:31:17.529572   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:17.539788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:17.539847   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:17.563677   54335 cri.go:89] found id: ""
	I1205 06:31:17.563691   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.563698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:17.563703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:17.563774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:17.593628   54335 cri.go:89] found id: ""
	I1205 06:31:17.593642   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.593649   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:17.593654   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:17.593720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:17.619071   54335 cri.go:89] found id: ""
	I1205 06:31:17.619084   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.619092   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:17.619097   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:17.619153   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:17.642944   54335 cri.go:89] found id: ""
	I1205 06:31:17.642958   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.642964   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:17.642970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:17.643037   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:17.667755   54335 cri.go:89] found id: ""
	I1205 06:31:17.667768   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.667775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:17.667780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:17.667836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:17.691060   54335 cri.go:89] found id: ""
	I1205 06:31:17.691073   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.691080   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:17.691085   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:17.691152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:17.714527   54335 cri.go:89] found id: ""
	I1205 06:31:17.714540   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.714547   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:17.714554   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:17.714564   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:17.777347   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:17.777365   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:17.804848   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:17.804862   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:17.866054   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:17.866072   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.877290   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:17.877305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:17.944157   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
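Every describe-nodes probe in this stretch fails identically: kubectl on the node dials https://localhost:8441 (the profile's --apiserver-port) and gets connection refused, which is consistent with the empty crictl listings, since no kube-apiserver container is up to accept the connection. A quick way to confirm nothing is bound to the port from inside the node (a sketch, not part of the test harness):

	# Expect no listener on the apiserver port while the control plane is down.
	sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
	# Reproduce the exact failure kubectl reports:
	curl -sk https://localhost:8441/healthz || echo "connection refused, as in the log"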
	I1205 06:31:20.445814   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:20.455929   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:20.456007   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:20.480265   54335 cri.go:89] found id: ""
	I1205 06:31:20.480280   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.480287   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:20.480294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:20.480371   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:20.504045   54335 cri.go:89] found id: ""
	I1205 06:31:20.504059   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.504065   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:20.504070   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:20.504128   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:20.528811   54335 cri.go:89] found id: ""
	I1205 06:31:20.528824   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.528831   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:20.528836   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:20.528893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:20.553249   54335 cri.go:89] found id: ""
	I1205 06:31:20.553272   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.553279   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:20.553284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:20.553358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:20.577735   54335 cri.go:89] found id: ""
	I1205 06:31:20.577767   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.577775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:20.577780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:20.577839   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:20.603821   54335 cri.go:89] found id: ""
	I1205 06:31:20.603835   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.603852   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:20.603858   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:20.603955   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:20.632954   54335 cri.go:89] found id: ""
	I1205 06:31:20.632985   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.632992   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:20.633000   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:20.633010   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:20.688822   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:20.688840   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:20.700167   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:20.700183   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:20.766199   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:20.766209   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:20.766219   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:20.829413   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:20.829439   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
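The "container status" step is runtime-agnostic by construction: it resolves crictl with `which` (falling back to the bare name, and thus to a $PATH lookup, if `which` finds nothing) and, if that listing fails outright, tries `docker ps -a` instead, so the same gathering code works on containerd and Docker nodes alike. Expanded into plain shell, the one-liner is roughly equivalent to:

	# Equivalent expansion of the container-status one-liner from the log.
	CRICTL=$(which crictl || echo crictl)       # prefer a resolved path, else the bare name
	sudo "$CRICTL" ps -a || sudo docker ps -a   # fall back to Docker on non-CRI setups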
	I1205 06:31:23.369036   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:23.379250   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:23.379308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:23.407254   54335 cri.go:89] found id: ""
	I1205 06:31:23.407268   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.407275   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:23.407280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:23.407335   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:23.431989   54335 cri.go:89] found id: ""
	I1205 06:31:23.432002   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.432009   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:23.432014   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:23.432079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:23.467269   54335 cri.go:89] found id: ""
	I1205 06:31:23.467287   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.467293   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:23.467299   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:23.467362   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:23.490943   54335 cri.go:89] found id: ""
	I1205 06:31:23.490956   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.490962   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:23.490968   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:23.491025   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:23.519217   54335 cri.go:89] found id: ""
	I1205 06:31:23.519232   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.519239   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:23.519244   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:23.519306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:23.543863   54335 cri.go:89] found id: ""
	I1205 06:31:23.543877   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.543883   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:23.543888   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:23.543956   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:23.567865   54335 cri.go:89] found id: ""
	I1205 06:31:23.567878   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.567897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:23.567905   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:23.567914   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:23.632509   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:23.632529   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.662290   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:23.662305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:23.719254   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:23.719272   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:23.730331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:23.730346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:23.792133   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:26.293128   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:26.304108   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:26.304168   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:26.331011   54335 cri.go:89] found id: ""
	I1205 06:31:26.331024   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.331031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:26.331040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:26.331097   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:26.358547   54335 cri.go:89] found id: ""
	I1205 06:31:26.358562   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.358569   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:26.358573   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:26.358630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:26.387125   54335 cri.go:89] found id: ""
	I1205 06:31:26.387139   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.387146   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:26.387151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:26.387210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:26.412329   54335 cri.go:89] found id: ""
	I1205 06:31:26.412343   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.412350   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:26.412355   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:26.412433   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:26.437117   54335 cri.go:89] found id: ""
	I1205 06:31:26.437130   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.437138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:26.437142   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:26.437253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:26.465767   54335 cri.go:89] found id: ""
	I1205 06:31:26.465779   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.465787   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:26.465792   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:26.465855   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:26.489618   54335 cri.go:89] found id: ""
	I1205 06:31:26.489636   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.489643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:26.489651   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:26.489661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:26.516285   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:26.516307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:26.571623   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:26.571639   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:26.582532   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:26.582547   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:26.648629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:26.648640   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:26.648652   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:29.213295   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:29.223226   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:29.223291   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:29.248501   54335 cri.go:89] found id: ""
	I1205 06:31:29.248514   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.248521   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:29.248526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:29.248585   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:29.273551   54335 cri.go:89] found id: ""
	I1205 06:31:29.273564   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.273571   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:29.273576   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:29.273633   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:29.297959   54335 cri.go:89] found id: ""
	I1205 06:31:29.297972   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.297979   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:29.297985   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:29.298043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:29.322784   54335 cri.go:89] found id: ""
	I1205 06:31:29.322798   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.322809   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:29.322814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:29.322870   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:29.351067   54335 cri.go:89] found id: ""
	I1205 06:31:29.351080   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.351087   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:29.351092   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:29.351163   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:29.378768   54335 cri.go:89] found id: ""
	I1205 06:31:29.378782   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.378789   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:29.378794   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:29.378854   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:29.403528   54335 cri.go:89] found id: ""
	I1205 06:31:29.403542   54335 logs.go:282] 0 containers: []
	W1205 06:31:29.403549   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:29.403556   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:29.403567   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:29.471248   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:29.463937   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.464521   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466184   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.466622   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:29.467929   12253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:29.471259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:29.471269   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:29.533062   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:29.533080   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:29.564293   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:29.564323   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:29.619083   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:29.619101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
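The dmesg pass keeps the kernel-log capture small and stable. Assuming util-linux dmesg, the flags read as follows (an annotation, not output from the run):

	# -PH            human-readable timestamps (-H) without invoking a pager (-P)
	# -L=never       no ANSI color codes in the captured text
	# --level ...    warnings and worse only (warn,err,crit,alert,emerg)
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400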
	I1205 06:31:32.130510   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:32.143539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:32.143642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:32.171414   54335 cri.go:89] found id: ""
	I1205 06:31:32.171428   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.171436   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:32.171441   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:32.171499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:32.196112   54335 cri.go:89] found id: ""
	I1205 06:31:32.196125   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.196132   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:32.196137   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:32.196195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:32.223236   54335 cri.go:89] found id: ""
	I1205 06:31:32.223250   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.223257   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:32.223261   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:32.223317   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:32.247226   54335 cri.go:89] found id: ""
	I1205 06:31:32.247240   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.247247   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:32.247252   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:32.247308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:32.275892   54335 cri.go:89] found id: ""
	I1205 06:31:32.275905   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.275912   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:32.275918   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:32.275975   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:32.304746   54335 cri.go:89] found id: ""
	I1205 06:31:32.304759   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.304767   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:32.304772   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:32.304831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:32.329065   54335 cri.go:89] found id: ""
	I1205 06:31:32.329078   54335 logs.go:282] 0 containers: []
	W1205 06:31:32.329085   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:32.329092   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:32.329101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:32.384331   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:32.384349   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:32.395108   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:32.395123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:32.457079   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:32.449726   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.450348   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.451857   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.452270   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:32.453749   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:32.457097   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:32.457108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:32.520612   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:32.520631   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:35.049835   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:35.059785   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:35.059850   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:35.084594   54335 cri.go:89] found id: ""
	I1205 06:31:35.084610   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.084617   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:35.084624   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:35.084682   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:35.119519   54335 cri.go:89] found id: ""
	I1205 06:31:35.119533   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.119553   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:35.119559   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:35.119625   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:35.146284   54335 cri.go:89] found id: ""
	I1205 06:31:35.146298   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.146305   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:35.146310   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:35.146370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:35.174570   54335 cri.go:89] found id: ""
	I1205 06:31:35.174583   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.174590   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:35.174596   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:35.174653   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:35.198347   54335 cri.go:89] found id: ""
	I1205 06:31:35.198361   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.198368   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:35.198374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:35.198430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:35.226196   54335 cri.go:89] found id: ""
	I1205 06:31:35.226210   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.226216   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:35.226222   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:35.226281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:35.250876   54335 cri.go:89] found id: ""
	I1205 06:31:35.250889   54335 logs.go:282] 0 containers: []
	W1205 06:31:35.250897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:35.250904   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:35.250913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:35.304930   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:35.304948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:35.315954   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:35.315970   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:35.377099   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:35.369290   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.369826   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371500   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.371964   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:35.373533   12470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:31:35.377109   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:35.377120   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:35.437784   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:35.437801   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:37.968228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:37.977892   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:37.977968   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:38.010142   54335 cri.go:89] found id: ""
	I1205 06:31:38.010158   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.010173   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:38.010180   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:38.010249   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:38.048020   54335 cri.go:89] found id: ""
	I1205 06:31:38.048034   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.048041   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:38.048047   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:38.048112   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:38.077977   54335 cri.go:89] found id: ""
	I1205 06:31:38.077991   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.077999   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:38.078004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:38.078068   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:38.115520   54335 cri.go:89] found id: ""
	I1205 06:31:38.115534   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.115541   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:38.115546   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:38.115618   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:38.141580   54335 cri.go:89] found id: ""
	I1205 06:31:38.141593   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.141613   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:38.141618   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:38.141673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:38.167473   54335 cri.go:89] found id: ""
	I1205 06:31:38.167487   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.167493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:38.167499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:38.167565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:38.190856   54335 cri.go:89] found id: ""
	I1205 06:31:38.190869   54335 logs.go:282] 0 containers: []
	W1205 06:31:38.190876   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:38.190884   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:38.190894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:38.245488   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:38.245505   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:38.255819   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:38.255834   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:38.319935   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:38.311836   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.312540   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314137   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314745   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.316388   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:38.311836   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.312540   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314137   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.314745   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:38.316388   12574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
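This connection-refused error is the central symptom of the failure: nothing is listening on the apiserver port 8441 inside the node, so every kubectl invocation that minikube shells out to dies before it can reach the API. As a quick illustration (a hypothetical standalone probe, not part of minikube or this test harness), a plain TCP dial distinguishes "port closed" from "apiserver up but unhealthy":

// probe8441.go - hypothetical helper, not part of minikube or the test harness.
// The log's "connect: connection refused" corresponds to the error branch here.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// No kube-apiserver process has bound the port.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441; check the apiserver's /readyz endpoint next")
}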
	I1205 06:31:38.319952   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:38.319963   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:38.381733   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:38.381750   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
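Each pass of this loop walks a fixed list of control-plane components and shells out to crictl once per name; the cri.go and logs.go entries above are the output of that walk, and the final "container status" step falls back from crictl to docker (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a) so the same gather works on nodes where crictl is absent. A minimal sketch of the polling pattern (names and structure are assumptions for illustration, not minikube's actual code):

// Hypothetical sketch of the per-component container poll seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil || strings.TrimSpace(string(out)) == "" {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %s\n", name, strings.TrimSpace(string(out)))
	}
}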
	I1205 06:31:40.911397   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:40.921257   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:40.921321   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:40.947604   54335 cri.go:89] found id: ""
	I1205 06:31:40.947618   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.947625   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:40.947630   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:40.947694   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:40.973136   54335 cri.go:89] found id: ""
	I1205 06:31:40.973148   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.973186   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:40.973191   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:40.973256   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:40.996412   54335 cri.go:89] found id: ""
	I1205 06:31:40.996425   54335 logs.go:282] 0 containers: []
	W1205 06:31:40.996432   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:40.996437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:40.996497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:41.024001   54335 cri.go:89] found id: ""
	I1205 06:31:41.024015   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.024022   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:41.024028   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:41.024086   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:41.051496   54335 cri.go:89] found id: ""
	I1205 06:31:41.051510   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.051517   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:41.051522   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:41.051582   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:41.080451   54335 cri.go:89] found id: ""
	I1205 06:31:41.080464   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.080471   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:41.080476   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:41.080533   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:41.117388   54335 cri.go:89] found id: ""
	I1205 06:31:41.117401   54335 logs.go:282] 0 containers: []
	W1205 06:31:41.117409   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:41.117416   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:41.117426   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:41.182349   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:41.182368   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:41.193093   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:41.193108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:41.254159   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:41.246911   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.247523   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249025   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249503   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.250928   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:41.246911   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.247523   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249025   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.249503   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:41.250928   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:41.254170   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:41.254181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:41.321082   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:41.321101   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:43.851964   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:43.862187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:43.862247   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:43.886923   54335 cri.go:89] found id: ""
	I1205 06:31:43.886937   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.886944   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:43.886950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:43.887009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:43.912496   54335 cri.go:89] found id: ""
	I1205 06:31:43.912509   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.912516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:43.912521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:43.912579   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:43.936914   54335 cri.go:89] found id: ""
	I1205 06:31:43.936928   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.936938   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:43.936943   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:43.937000   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:43.961282   54335 cri.go:89] found id: ""
	I1205 06:31:43.961297   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.961304   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:43.961314   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:43.961378   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:43.988380   54335 cri.go:89] found id: ""
	I1205 06:31:43.988394   54335 logs.go:282] 0 containers: []
	W1205 06:31:43.988401   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:43.988406   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:43.988464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:44.020415   54335 cri.go:89] found id: ""
	I1205 06:31:44.020429   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.020437   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:44.020442   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:44.020501   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:44.045852   54335 cri.go:89] found id: ""
	I1205 06:31:44.045866   54335 logs.go:282] 0 containers: []
	W1205 06:31:44.045873   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:44.045881   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:44.045894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:44.056666   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:44.056681   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:44.135868   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:44.126530   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.127194   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129371   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129954   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.131639   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:44.126530   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.127194   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129371   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.129954   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:44.131639   12777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:44.135879   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:44.135890   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:44.204481   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:44.204500   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:44.232917   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:44.232935   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:46.789779   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:46.799818   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:46.799875   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:46.823971   54335 cri.go:89] found id: ""
	I1205 06:31:46.823985   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.823992   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:46.823998   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:46.824061   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:46.848342   54335 cri.go:89] found id: ""
	I1205 06:31:46.848356   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.848363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:46.848368   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:46.848425   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:46.873786   54335 cri.go:89] found id: ""
	I1205 06:31:46.873800   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.873807   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:46.873812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:46.873873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:46.903465   54335 cri.go:89] found id: ""
	I1205 06:31:46.903479   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.903487   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:46.903492   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:46.903549   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:46.932432   54335 cri.go:89] found id: ""
	I1205 06:31:46.932446   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.932453   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:46.932458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:46.932518   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:46.957671   54335 cri.go:89] found id: ""
	I1205 06:31:46.957684   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.957692   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:46.957697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:46.957760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:46.983050   54335 cri.go:89] found id: ""
	I1205 06:31:46.983063   54335 logs.go:282] 0 containers: []
	W1205 06:31:46.983077   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:46.983085   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:46.983095   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:47.042088   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:47.042105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:47.053482   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:47.053498   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:47.131108   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:47.122748   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.123420   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125206   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125739   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.127319   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:47.122748   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.123420   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125206   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.125739   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:47.127319   12885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:47.131117   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:47.131128   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:47.204434   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:47.204452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:49.735640   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:49.745807   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:49.745868   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:49.770984   54335 cri.go:89] found id: ""
	I1205 06:31:49.770997   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.771004   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:49.771009   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:49.771072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:49.795524   54335 cri.go:89] found id: ""
	I1205 06:31:49.795538   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.795545   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:49.795550   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:49.795605   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:49.820126   54335 cri.go:89] found id: ""
	I1205 06:31:49.820140   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.820147   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:49.820152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:49.820209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:49.844379   54335 cri.go:89] found id: ""
	I1205 06:31:49.844392   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.844401   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:49.844408   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:49.844465   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:49.871132   54335 cri.go:89] found id: ""
	I1205 06:31:49.871144   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.871152   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:49.871157   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:49.871214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:49.894867   54335 cri.go:89] found id: ""
	I1205 06:31:49.894880   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.894887   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:49.894893   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:49.894949   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:49.920144   54335 cri.go:89] found id: ""
	I1205 06:31:49.920157   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.920164   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:49.920171   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:49.920181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:49.979573   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:49.979595   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:49.990405   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:49.990420   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:50.061353   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:50.061364   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:50.061376   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:50.139097   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:50.139131   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:52.678459   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:52.688604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:52.688663   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:52.712686   54335 cri.go:89] found id: ""
	I1205 06:31:52.712700   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.712707   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:52.712712   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:52.712774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:52.746954   54335 cri.go:89] found id: ""
	I1205 06:31:52.746968   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.746975   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:52.746980   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:52.747039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:52.771325   54335 cri.go:89] found id: ""
	I1205 06:31:52.771338   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.771345   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:52.771350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:52.771406   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:52.795882   54335 cri.go:89] found id: ""
	I1205 06:31:52.795896   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.795902   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:52.795908   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:52.795965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:52.820064   54335 cri.go:89] found id: ""
	I1205 06:31:52.820079   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.820085   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:52.820090   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:52.820150   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:52.848297   54335 cri.go:89] found id: ""
	I1205 06:31:52.848311   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.848317   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:52.848323   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:52.848381   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:52.876041   54335 cri.go:89] found id: ""
	I1205 06:31:52.876055   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.876062   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:52.876069   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:52.876079   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:52.931790   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:52.931811   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:52.942929   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:52.942944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:53.007664   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:53.007675   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:53.007686   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:53.073695   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:53.073712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.610763   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:55.620883   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:55.620945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:55.645677   54335 cri.go:89] found id: ""
	I1205 06:31:55.645691   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.645698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:55.645703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:55.645763   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:55.670962   54335 cri.go:89] found id: ""
	I1205 06:31:55.670975   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.670982   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:55.670987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:55.671045   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:55.695354   54335 cri.go:89] found id: ""
	I1205 06:31:55.695367   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.695374   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:55.695379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:55.695447   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:55.719264   54335 cri.go:89] found id: ""
	I1205 06:31:55.719277   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.719284   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:55.719290   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:55.719347   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:55.742928   54335 cri.go:89] found id: ""
	I1205 06:31:55.742941   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.742948   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:55.742954   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:55.743013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:55.766643   54335 cri.go:89] found id: ""
	I1205 06:31:55.766657   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.766664   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:55.766672   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:55.766729   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:55.789985   54335 cri.go:89] found id: ""
	I1205 06:31:55.789999   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.790005   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:55.790051   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:55.790062   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.817984   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:55.818000   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:55.874068   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:55.874085   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:55.885873   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:55.885888   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:55.950375   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:55.950385   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:55.950396   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.513319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:58.523187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:58.523244   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:58.546403   54335 cri.go:89] found id: ""
	I1205 06:31:58.546416   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.546423   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:58.546429   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:58.546486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:58.570005   54335 cri.go:89] found id: ""
	I1205 06:31:58.570019   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.570035   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:58.570040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:58.570098   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:58.594200   54335 cri.go:89] found id: ""
	I1205 06:31:58.594214   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.594220   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:58.594225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:58.594284   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:58.618421   54335 cri.go:89] found id: ""
	I1205 06:31:58.618434   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.618440   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:58.618445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:58.618499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:58.642656   54335 cri.go:89] found id: ""
	I1205 06:31:58.642669   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.642676   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:58.642682   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:58.642742   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:58.667838   54335 cri.go:89] found id: ""
	I1205 06:31:58.667850   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.667858   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:58.667863   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:58.667933   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:58.695900   54335 cri.go:89] found id: ""
	I1205 06:31:58.695914   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.695921   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:58.695929   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:58.695939   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:58.751191   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:58.751209   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:58.761861   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:58.761882   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:58.829503   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:58.829513   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:58.829524   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.892286   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:58.892304   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:01.420326   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:01.430350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:01.430415   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:01.455307   54335 cri.go:89] found id: ""
	I1205 06:32:01.455320   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.455328   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:01.455333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:01.455388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:01.479758   54335 cri.go:89] found id: ""
	I1205 06:32:01.479771   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.479778   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:01.479784   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:01.479840   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:01.502828   54335 cri.go:89] found id: ""
	I1205 06:32:01.502841   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.502848   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:01.502853   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:01.502908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:01.528675   54335 cri.go:89] found id: ""
	I1205 06:32:01.528688   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.528698   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:01.528704   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:01.528762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:01.553405   54335 cri.go:89] found id: ""
	I1205 06:32:01.553419   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.553426   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:01.553431   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:01.553510   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:01.578373   54335 cri.go:89] found id: ""
	I1205 06:32:01.578387   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.578394   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:01.578400   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:01.578464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:01.603666   54335 cri.go:89] found id: ""
	I1205 06:32:01.603689   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.603697   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:01.603704   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:01.603714   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:01.661152   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:01.661181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:01.672814   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:01.672831   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:01.736722   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:01.736731   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:01.736742   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:01.799762   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:01.799780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
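The block above is one iteration of minikube's wait loop: probe for a kube-apiserver process, list CRI containers for each control-plane component, and fall back to gathering kubelet/dmesg/containerd logs when nothing is found. It repeats roughly every three seconds for the rest of this test. A minimal stand-alone sketch of that probe, assuming only the crictl invocation shown in the log (the helper names here are hypothetical, not minikube's own):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // components mirrors the names probed in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainerIDs runs the same crictl command the log shows and
    // returns the container IDs it prints, one per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for {
            missing := 0
            for _, c := range components {
                ids, err := listContainerIDs(c)
                if err != nil || len(ids) == 0 {
                    fmt.Printf("no container found matching %q\n", c)
                    missing++
                }
            }
            if missing == 0 {
                return // control-plane containers are up
            }
            time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
        }
    }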
	I1205 06:32:04.328972   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:04.339381   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:04.339441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:04.365391   54335 cri.go:89] found id: ""
	I1205 06:32:04.365405   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.365412   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:04.365418   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:04.365487   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:04.395556   54335 cri.go:89] found id: ""
	I1205 06:32:04.395570   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.395577   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:04.395582   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:04.395640   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:04.425328   54335 cri.go:89] found id: ""
	I1205 06:32:04.425341   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.425348   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:04.425354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:04.425420   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:04.450514   54335 cri.go:89] found id: ""
	I1205 06:32:04.450528   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.450536   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:04.450541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:04.450604   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:04.479372   54335 cri.go:89] found id: ""
	I1205 06:32:04.479386   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.479393   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:04.479398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:04.479459   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:04.504452   54335 cri.go:89] found id: ""
	I1205 06:32:04.504466   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.504473   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:04.504479   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:04.504539   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:04.529609   54335 cri.go:89] found id: ""
	I1205 06:32:04.529622   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.529629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:04.529637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:04.529649   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:04.584301   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:04.584319   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:04.595557   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:04.595572   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:04.660266   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:04.660277   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:04.660288   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:04.723098   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:04.723115   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:07.257738   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:07.268081   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:07.268144   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:07.292559   54335 cri.go:89] found id: ""
	I1205 06:32:07.292573   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.292580   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:07.292585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:07.292645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:07.316782   54335 cri.go:89] found id: ""
	I1205 06:32:07.316796   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.316803   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:07.316809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:07.316869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:07.346176   54335 cri.go:89] found id: ""
	I1205 06:32:07.346189   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.346196   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:07.346201   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:07.346263   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:07.378787   54335 cri.go:89] found id: ""
	I1205 06:32:07.378800   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.378807   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:07.378812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:07.378869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:07.406652   54335 cri.go:89] found id: ""
	I1205 06:32:07.406666   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.406673   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:07.406678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:07.406746   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:07.438624   54335 cri.go:89] found id: ""
	I1205 06:32:07.438642   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.438649   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:07.438655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:07.438726   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:07.464230   54335 cri.go:89] found id: ""
	I1205 06:32:07.464243   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.464250   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:07.464257   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:07.464266   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:07.520945   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:07.520962   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:07.531896   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:07.531911   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:07.598302   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:07.598317   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:07.598327   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:07.661122   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:07.661139   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.190348   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:10.201225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:10.201307   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:10.230433   54335 cri.go:89] found id: ""
	I1205 06:32:10.230446   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.230453   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:10.230458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:10.230512   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:10.254051   54335 cri.go:89] found id: ""
	I1205 06:32:10.254070   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.254077   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:10.254082   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:10.254140   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:10.278518   54335 cri.go:89] found id: ""
	I1205 06:32:10.278531   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.278538   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:10.278543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:10.278599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:10.302979   54335 cri.go:89] found id: ""
	I1205 06:32:10.302992   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.302999   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:10.303004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:10.303059   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:10.331316   54335 cri.go:89] found id: ""
	I1205 06:32:10.331330   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.331337   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:10.331341   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:10.331400   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:10.362875   54335 cri.go:89] found id: ""
	I1205 06:32:10.362889   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.362896   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:10.362902   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:10.362959   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:10.393788   54335 cri.go:89] found id: ""
	I1205 06:32:10.393802   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.393810   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:10.393818   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:10.393829   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:10.459886   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:10.459895   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:10.459905   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:10.521460   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:10.521481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.549040   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:10.549056   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:10.605396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:10.605414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.117854   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:13.128117   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:13.128179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:13.153085   54335 cri.go:89] found id: ""
	I1205 06:32:13.153098   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.153105   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:13.153110   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:13.153199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:13.178442   54335 cri.go:89] found id: ""
	I1205 06:32:13.178455   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.178462   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:13.178467   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:13.178524   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:13.203207   54335 cri.go:89] found id: ""
	I1205 06:32:13.203220   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.203229   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:13.203234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:13.203292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:13.228073   54335 cri.go:89] found id: ""
	I1205 06:32:13.228086   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.228093   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:13.228098   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:13.228159   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:13.253259   54335 cri.go:89] found id: ""
	I1205 06:32:13.253272   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.253288   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:13.253293   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:13.253350   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:13.278480   54335 cri.go:89] found id: ""
	I1205 06:32:13.278493   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.278500   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:13.278506   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:13.278562   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:13.301934   54335 cri.go:89] found id: ""
	I1205 06:32:13.301948   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.301955   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:13.301962   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:13.301972   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:13.356855   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:13.356876   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.368331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:13.368352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:13.438131   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:13.438141   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:13.438151   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:13.501680   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:13.501699   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:16.032304   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:16.042939   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:16.043006   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:16.069762   54335 cri.go:89] found id: ""
	I1205 06:32:16.069775   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.069782   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:16.069788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:16.069844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:16.094242   54335 cri.go:89] found id: ""
	I1205 06:32:16.094255   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.094264   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:16.094270   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:16.094336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:16.120352   54335 cri.go:89] found id: ""
	I1205 06:32:16.120366   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.120373   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:16.120378   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:16.120435   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:16.149183   54335 cri.go:89] found id: ""
	I1205 06:32:16.149196   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.149203   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:16.149208   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:16.149270   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:16.179309   54335 cri.go:89] found id: ""
	I1205 06:32:16.179322   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.179328   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:16.179333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:16.179388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:16.204104   54335 cri.go:89] found id: ""
	I1205 06:32:16.204118   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.204125   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:16.204130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:16.204190   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:16.230914   54335 cri.go:89] found id: ""
	I1205 06:32:16.230927   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.230934   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:16.230941   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:16.230950   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:16.286405   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:16.286423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:16.297122   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:16.297136   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:16.367421   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:16.367430   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:16.367442   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:16.452050   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:16.452076   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
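Every "describe nodes" attempt above fails the same way: kubectl's five retries each hit "dial tcp [::1]:8441: connect: connection refused". That error means nothing is listening on the apiserver port at all (consistent with the empty kube-apiserver container listings), rather than a TLS or auth problem. A quick sketch of the same distinction, assuming the port 8441 from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Dial the apiserver port directly. "connection refused" here is the
    // same symptom as the kubectl stderr above: no listener on 8441.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }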
	I1205 06:32:18.982231   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:18.992354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:18.992412   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:19.017989   54335 cri.go:89] found id: ""
	I1205 06:32:19.018004   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.018011   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:19.018016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:19.018077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:19.042217   54335 cri.go:89] found id: ""
	I1205 06:32:19.042230   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.042237   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:19.042242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:19.042301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:19.066699   54335 cri.go:89] found id: ""
	I1205 06:32:19.066713   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.066720   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:19.066725   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:19.066785   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:19.095590   54335 cri.go:89] found id: ""
	I1205 06:32:19.095603   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.095610   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:19.095616   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:19.095672   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:19.119155   54335 cri.go:89] found id: ""
	I1205 06:32:19.119169   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.119176   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:19.119181   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:19.119237   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:19.142787   54335 cri.go:89] found id: ""
	I1205 06:32:19.142801   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.142807   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:19.142813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:19.142873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:19.168013   54335 cri.go:89] found id: ""
	I1205 06:32:19.168025   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.168032   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:19.168039   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:19.168051   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:19.178464   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:19.178481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:19.240233   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:19.240244   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:19.240253   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:19.300198   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:19.300217   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:19.329682   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:19.329697   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:21.888551   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:21.898274   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:21.898337   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:21.922474   54335 cri.go:89] found id: ""
	I1205 06:32:21.922486   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.922493   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:21.922498   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:21.922558   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:21.950761   54335 cri.go:89] found id: ""
	I1205 06:32:21.950775   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.950781   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:21.950786   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:21.950844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:21.973829   54335 cri.go:89] found id: ""
	I1205 06:32:21.973843   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.973849   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:21.973854   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:21.973912   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:21.997620   54335 cri.go:89] found id: ""
	I1205 06:32:21.997634   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.997641   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:21.997647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:21.997702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:22.033207   54335 cri.go:89] found id: ""
	I1205 06:32:22.033221   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.033228   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:22.033234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:22.033296   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:22.062888   54335 cri.go:89] found id: ""
	I1205 06:32:22.062902   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.062909   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:22.062915   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:22.062973   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:22.091975   54335 cri.go:89] found id: ""
	I1205 06:32:22.091989   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.091996   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:22.092004   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:22.092017   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:22.103145   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:22.103160   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:22.164851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:22.164860   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:22.164870   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:22.226105   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:22.226124   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:22.253915   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:22.253929   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:24.811993   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:24.821806   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:24.821865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:24.845836   54335 cri.go:89] found id: ""
	I1205 06:32:24.845850   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.845857   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:24.845864   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:24.845919   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:24.870475   54335 cri.go:89] found id: ""
	I1205 06:32:24.870489   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.870496   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:24.870505   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:24.870560   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:24.895049   54335 cri.go:89] found id: ""
	I1205 06:32:24.895061   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.895068   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:24.895074   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:24.895130   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:24.924307   54335 cri.go:89] found id: ""
	I1205 06:32:24.924320   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.924327   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:24.924332   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:24.924390   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:24.949595   54335 cri.go:89] found id: ""
	I1205 06:32:24.949608   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.949616   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:24.949621   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:24.949680   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:24.974582   54335 cri.go:89] found id: ""
	I1205 06:32:24.974595   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.974602   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:24.974607   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:24.974664   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:25.003723   54335 cri.go:89] found id: ""
	I1205 06:32:25.003739   54335 logs.go:282] 0 containers: []
	W1205 06:32:25.003747   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:25.003755   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:25.003766   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:25.065829   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:25.065846   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:25.077220   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:25.077236   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:25.140111   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:25.140121   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:25.140135   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:25.206118   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:25.206137   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:27.733938   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:27.744224   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:27.744282   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:27.769011   54335 cri.go:89] found id: ""
	I1205 06:32:27.769024   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.769031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:27.769036   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:27.769094   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:27.793434   54335 cri.go:89] found id: ""
	I1205 06:32:27.793448   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.793455   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:27.793460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:27.793556   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:27.821088   54335 cri.go:89] found id: ""
	I1205 06:32:27.821101   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.821108   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:27.821112   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:27.821209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:27.847229   54335 cri.go:89] found id: ""
	I1205 06:32:27.847242   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.847249   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:27.847254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:27.847310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:27.870944   54335 cri.go:89] found id: ""
	I1205 06:32:27.870958   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.870965   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:27.870970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:27.871031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:27.895361   54335 cri.go:89] found id: ""
	I1205 06:32:27.895375   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.895382   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:27.895388   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:27.895445   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:27.920868   54335 cri.go:89] found id: ""
	I1205 06:32:27.920881   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.920888   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:27.920897   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:27.920908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:27.984326   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:27.984346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:28.018053   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:28.018070   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:28.075646   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:28.075663   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:28.087097   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:28.087112   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:28.151403   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:30.651598   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:30.661458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:30.661527   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:30.689413   54335 cri.go:89] found id: ""
	I1205 06:32:30.689426   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.689443   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:30.689450   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:30.689523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:30.712971   54335 cri.go:89] found id: ""
	I1205 06:32:30.712987   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.712994   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:30.712999   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:30.713057   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:30.737851   54335 cri.go:89] found id: ""
	I1205 06:32:30.737871   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.737879   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:30.737884   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:30.737945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:30.761745   54335 cri.go:89] found id: ""
	I1205 06:32:30.761759   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.761766   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:30.761771   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:30.761836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:30.784898   54335 cri.go:89] found id: ""
	I1205 06:32:30.784912   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.784919   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:30.784924   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:30.784980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:30.810894   54335 cri.go:89] found id: ""
	I1205 06:32:30.810908   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.810915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:30.810920   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:30.810976   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:30.839604   54335 cri.go:89] found id: ""
	I1205 06:32:30.839617   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.839623   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:30.839636   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:30.839647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:30.865641   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:30.865658   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:30.921606   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:30.921625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:30.932281   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:30.932297   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:30.995168   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:30.995177   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:30.995187   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.558401   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:33.568813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:33.568893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:33.596483   54335 cri.go:89] found id: ""
	I1205 06:32:33.596496   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.596503   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:33.596508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:33.596566   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:33.624025   54335 cri.go:89] found id: ""
	I1205 06:32:33.624039   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.624046   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:33.624051   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:33.624108   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:33.655953   54335 cri.go:89] found id: ""
	I1205 06:32:33.655966   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.655974   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:33.655979   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:33.656039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:33.684431   54335 cri.go:89] found id: ""
	I1205 06:32:33.684445   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.684452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:33.684458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:33.684517   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:33.710631   54335 cri.go:89] found id: ""
	I1205 06:32:33.710644   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.710651   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:33.710656   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:33.710714   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:33.735367   54335 cri.go:89] found id: ""
	I1205 06:32:33.735380   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.735387   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:33.735393   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:33.735450   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:33.759636   54335 cri.go:89] found id: ""
	I1205 06:32:33.759650   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.759657   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:33.759664   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:33.759675   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:33.814547   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:33.814565   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:33.825805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:33.825820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:33.891604   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:33.891614   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:33.891624   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.953767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:33.953787   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
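	Every failed "describe nodes" attempt in this loop reduces to the same symptom: kubectl on the node dials localhost:8441 (the apiserver port this profile was started with) and gets connection refused, which is consistent with the empty crictl listings: no kube-apiserver container ever started. A standalone sketch of just that connectivity probe (not minikube code; host and port copied from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same failure mode as the memcache.go errors in the log: nothing is
		// listening on the apiserver port inside the node.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is open")
	}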
	I1205 06:32:36.482228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:36.492694   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:36.492753   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:36.518206   54335 cri.go:89] found id: ""
	I1205 06:32:36.518222   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.518229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:36.518233   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:36.518290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:36.543531   54335 cri.go:89] found id: ""
	I1205 06:32:36.543544   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.543551   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:36.543556   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:36.543615   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:36.567286   54335 cri.go:89] found id: ""
	I1205 06:32:36.567299   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.567306   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:36.567311   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:36.567367   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:36.592165   54335 cri.go:89] found id: ""
	I1205 06:32:36.592178   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.592185   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:36.592190   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:36.592246   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:36.621238   54335 cri.go:89] found id: ""
	I1205 06:32:36.621251   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.621258   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:36.621264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:36.621329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:36.646816   54335 cri.go:89] found id: ""
	I1205 06:32:36.646838   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.646845   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:36.646850   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:36.646917   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:36.672562   54335 cri.go:89] found id: ""
	I1205 06:32:36.672575   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.672582   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:36.672599   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:36.672609   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:36.727909   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:36.727926   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:36.738625   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:36.738641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:36.803851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:36.803861   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:36.803872   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:36.865831   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:36.865849   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:39.393852   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:39.404022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:39.404090   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:39.433108   54335 cri.go:89] found id: ""
	I1205 06:32:39.433122   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.433129   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:39.433134   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:39.433218   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:39.458840   54335 cri.go:89] found id: ""
	I1205 06:32:39.458853   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.458862   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:39.458867   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:39.458923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:39.483121   54335 cri.go:89] found id: ""
	I1205 06:32:39.483135   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.483142   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:39.483147   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:39.483203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:39.508080   54335 cri.go:89] found id: ""
	I1205 06:32:39.508092   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.508100   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:39.508107   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:39.508166   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:39.532483   54335 cri.go:89] found id: ""
	I1205 06:32:39.532496   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.532503   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:39.532508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:39.532563   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:39.556203   54335 cri.go:89] found id: ""
	I1205 06:32:39.556217   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.556224   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:39.556229   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:39.556286   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:39.579787   54335 cri.go:89] found id: ""
	I1205 06:32:39.579802   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.579809   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:39.579818   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:39.579828   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:39.644828   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:39.644847   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:39.657327   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:39.657341   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:39.724034   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:39.724044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:39.724054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:39.786205   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:39.786224   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:42.317043   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:42.327925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:42.327988   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:42.353925   54335 cri.go:89] found id: ""
	I1205 06:32:42.353939   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.353946   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:42.353952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:42.354013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:42.385300   54335 cri.go:89] found id: ""
	I1205 06:32:42.385314   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.385321   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:42.385326   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:42.385385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:42.411306   54335 cri.go:89] found id: ""
	I1205 06:32:42.411319   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.411326   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:42.411331   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:42.411389   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:42.436499   54335 cri.go:89] found id: ""
	I1205 06:32:42.436513   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.436520   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:42.436526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:42.436590   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:42.461983   54335 cri.go:89] found id: ""
	I1205 06:32:42.462000   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.462008   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:42.462013   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:42.462072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:42.490948   54335 cri.go:89] found id: ""
	I1205 06:32:42.490962   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.490971   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:42.490976   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:42.491036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:42.515766   54335 cri.go:89] found id: ""
	I1205 06:32:42.515785   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.515793   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:42.515800   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:42.515810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:42.571249   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:42.571267   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:42.582146   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:42.582161   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:42.671227   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:42.671236   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:42.671247   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:42.733761   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:42.733780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:45.261718   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:45.276631   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:45.276700   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:45.305280   54335 cri.go:89] found id: ""
	I1205 06:32:45.305296   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.305304   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:45.305309   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:45.305375   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:45.332314   54335 cri.go:89] found id: ""
	I1205 06:32:45.332407   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.332482   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:45.332488   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:45.332551   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:45.368080   54335 cri.go:89] found id: ""
	I1205 06:32:45.368141   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.368165   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:45.368171   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:45.368336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:45.400257   54335 cri.go:89] found id: ""
	I1205 06:32:45.400284   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.400292   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:45.400298   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:45.400368   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:45.425301   54335 cri.go:89] found id: ""
	I1205 06:32:45.425314   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.425321   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:45.425327   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:45.425385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:45.450756   54335 cri.go:89] found id: ""
	I1205 06:32:45.450769   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.450777   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:45.450782   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:45.450845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:45.481391   54335 cri.go:89] found id: ""
	I1205 06:32:45.481405   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.481413   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:45.481421   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:45.481441   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:45.539446   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:45.539465   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:45.550849   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:45.550865   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:45.628789   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:45.628800   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:45.628810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:45.699540   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:45.699558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.227049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:48.237481   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:48.237550   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:48.267696   54335 cri.go:89] found id: ""
	I1205 06:32:48.267709   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.267716   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:48.267721   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:48.267789   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:48.294097   54335 cri.go:89] found id: ""
	I1205 06:32:48.294112   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.294118   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:48.294124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:48.294186   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:48.324117   54335 cri.go:89] found id: ""
	I1205 06:32:48.324131   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.324139   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:48.324144   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:48.324203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:48.349743   54335 cri.go:89] found id: ""
	I1205 06:32:48.349758   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.349765   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:48.349781   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:48.349849   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:48.379197   54335 cri.go:89] found id: ""
	I1205 06:32:48.379211   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.379219   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:48.379225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:48.379283   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:48.404472   54335 cri.go:89] found id: ""
	I1205 06:32:48.404486   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.404493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:48.404499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:48.404555   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:48.430058   54335 cri.go:89] found id: ""
	I1205 06:32:48.430072   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.430079   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:48.430086   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:48.430099   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.459503   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:48.459519   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:48.518141   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:48.518158   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:48.529014   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:48.529031   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:48.601337   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:48.601347   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:48.601357   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.177615   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:51.187543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:51.187599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:51.218589   54335 cri.go:89] found id: ""
	I1205 06:32:51.218603   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.218610   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:51.218615   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:51.218673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:51.243490   54335 cri.go:89] found id: ""
	I1205 06:32:51.243509   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.243516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:51.243521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:51.243577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:51.268372   54335 cri.go:89] found id: ""
	I1205 06:32:51.268385   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.268393   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:51.268398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:51.268458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:51.292432   54335 cri.go:89] found id: ""
	I1205 06:32:51.292445   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.292452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:51.292457   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:51.292513   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:51.316338   54335 cri.go:89] found id: ""
	I1205 06:32:51.316351   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.316358   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:51.316364   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:51.316419   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:51.341611   54335 cri.go:89] found id: ""
	I1205 06:32:51.341625   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.341645   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:51.341650   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:51.341708   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:51.365650   54335 cri.go:89] found id: ""
	I1205 06:32:51.365664   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.365671   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:51.365679   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:51.365690   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:51.377639   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:51.377655   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:51.443518   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:51.443527   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:51.443540   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.505744   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:51.505763   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:51.532869   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:51.532884   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
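	The cycle above is a fixed diagnostic sweep: a pgrep for a running kube-apiserver, one crictl query per control-plane component, then log collection (kubelet, dmesg, describe nodes, containerd, container status). A minimal shell sketch of the same sweep, assuming interactive shell access to the node (for example via minikube ssh against the profile under test); the individual commands are the ones the log itself runs, only the loop wrapper is added here:

	    # one crictl query per component, exactly as the sweep issues them;
	    # empty output means no matching container exists in any state
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching $name"
	    done
	    # the log-collection half of the sweep
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400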
	I1205 06:32:54.096225   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:54.106698   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:54.106760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:54.134689   54335 cri.go:89] found id: ""
	I1205 06:32:54.134702   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.134709   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:54.134714   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:54.134769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:54.158113   54335 cri.go:89] found id: ""
	I1205 06:32:54.158126   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.158133   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:54.158138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:54.158199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:54.182422   54335 cri.go:89] found id: ""
	I1205 06:32:54.182436   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.182444   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:54.182448   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:54.182508   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:54.206399   54335 cri.go:89] found id: ""
	I1205 06:32:54.206412   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.206418   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:54.206423   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:54.206481   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:54.229926   54335 cri.go:89] found id: ""
	I1205 06:32:54.229940   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.229947   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:54.229952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:54.230011   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:54.254356   54335 cri.go:89] found id: ""
	I1205 06:32:54.254370   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.254377   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:54.254382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:54.254441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:54.278495   54335 cri.go:89] found id: ""
	I1205 06:32:54.278508   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.278516   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:54.278523   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:54.278533   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:54.305603   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:54.305619   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.360184   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:54.360202   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:54.371510   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:54.371525   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:54.438927   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:54.438936   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:54.438947   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.002913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:57.020172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:57.020235   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:57.044543   54335 cri.go:89] found id: ""
	I1205 06:32:57.044556   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.044564   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:57.044570   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:57.044629   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:57.070053   54335 cri.go:89] found id: ""
	I1205 06:32:57.070067   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.070074   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:57.070079   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:57.070134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:57.094644   54335 cri.go:89] found id: ""
	I1205 06:32:57.094659   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.094666   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:57.094670   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:57.094769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:57.118698   54335 cri.go:89] found id: ""
	I1205 06:32:57.118722   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.118729   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:57.118734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:57.118799   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:57.142854   54335 cri.go:89] found id: ""
	I1205 06:32:57.142868   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.142875   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:57.142881   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:57.142946   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:57.171220   54335 cri.go:89] found id: ""
	I1205 06:32:57.171234   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.171241   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:57.171246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:57.171311   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:57.195529   54335 cri.go:89] found id: ""
	I1205 06:32:57.195544   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.195551   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:57.195558   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:57.195578   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:57.251284   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:57.251305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:57.262555   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:57.262570   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:57.333629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:32:57.333638   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:57.333651   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.394773   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:57.394791   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:59.923047   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:59.933128   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:59.933207   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:59.960876   54335 cri.go:89] found id: ""
	I1205 06:32:59.960890   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.960896   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:59.960901   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:59.960961   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:59.985649   54335 cri.go:89] found id: ""
	I1205 06:32:59.985664   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.985671   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:59.985676   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:59.985737   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:00.069985   54335 cri.go:89] found id: ""
	I1205 06:33:00.070002   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.070019   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:00.070026   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:00.070103   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:00.156917   54335 cri.go:89] found id: ""
	I1205 06:33:00.156936   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.156945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:00.156958   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:00.157043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:00.284647   54335 cri.go:89] found id: ""
	I1205 06:33:00.284663   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.284672   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:00.284678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:00.284758   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:00.335248   54335 cri.go:89] found id: ""
	I1205 06:33:00.335263   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.335271   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:00.335280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:00.335365   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:00.377235   54335 cri.go:89] found id: ""
	I1205 06:33:00.377251   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.377259   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:00.377267   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:00.377291   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:00.390543   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:00.390561   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:00.464312   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:00.464323   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:00.464334   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:00.528767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:00.528786   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:00.562265   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:00.562282   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.126784   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:03.137248   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:03.137309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:03.163136   54335 cri.go:89] found id: ""
	I1205 06:33:03.163149   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.163156   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:03.163161   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:03.163221   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:03.189239   54335 cri.go:89] found id: ""
	I1205 06:33:03.189253   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.189261   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:03.189277   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:03.189340   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:03.215019   54335 cri.go:89] found id: ""
	I1205 06:33:03.215032   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.215039   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:03.215045   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:03.215104   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:03.240336   54335 cri.go:89] found id: ""
	I1205 06:33:03.240350   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.240357   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:03.240362   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:03.240421   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:03.264735   54335 cri.go:89] found id: ""
	I1205 06:33:03.264749   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.264762   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:03.264767   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:03.264831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:03.289528   54335 cri.go:89] found id: ""
	I1205 06:33:03.289541   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.289548   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:03.289553   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:03.289658   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:03.315032   54335 cri.go:89] found id: ""
	I1205 06:33:03.315046   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.315053   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:03.315060   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:03.315071   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.371569   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:03.371588   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:03.382809   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:03.382825   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:03.450556   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:03.450566   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:03.450577   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:03.516929   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:03.516948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
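	Every describe-nodes attempt fails identically: kubectl cannot reach the apiserver on localhost:8441 ([::1]:8441, connection refused), which matches the crictl sweeps finding no kube-apiserver container at all. A hedged way to confirm the port is simply closed, assuming curl and ss are available inside the node (an assumption, neither appears in this log):

	    # hypothetical direct probes of the port kubectl is dialing
	    curl -sk https://localhost:8441/healthz || echo "no answer on 8441"
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"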
	I1205 06:33:06.046009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:06.057281   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:06.057355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:06.084601   54335 cri.go:89] found id: ""
	I1205 06:33:06.084615   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.084623   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:06.084629   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:06.084690   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:06.111286   54335 cri.go:89] found id: ""
	I1205 06:33:06.111300   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.111307   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:06.111313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:06.111374   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:06.136965   54335 cri.go:89] found id: ""
	I1205 06:33:06.136978   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.136985   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:06.136990   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:06.137048   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:06.162299   54335 cri.go:89] found id: ""
	I1205 06:33:06.162312   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.162319   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:06.162325   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:06.162387   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:06.189555   54335 cri.go:89] found id: ""
	I1205 06:33:06.189569   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.189576   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:06.189581   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:06.189645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:06.215170   54335 cri.go:89] found id: ""
	I1205 06:33:06.215184   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.215192   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:06.215198   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:06.215258   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:06.241073   54335 cri.go:89] found id: ""
	I1205 06:33:06.241087   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.241094   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:06.241112   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:06.241123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:06.296188   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:06.296205   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:06.306926   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:06.306941   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:06.371295   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:06.371304   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:06.371316   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:06.432933   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:06.432951   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:08.969294   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:08.979402   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:08.979463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:09.020683   54335 cri.go:89] found id: ""
	I1205 06:33:09.020697   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.020704   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:09.020710   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:09.020771   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:09.046109   54335 cri.go:89] found id: ""
	I1205 06:33:09.046123   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.046130   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:09.046136   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:09.046195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:09.070968   54335 cri.go:89] found id: ""
	I1205 06:33:09.070981   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.070988   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:09.070995   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:09.071056   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:09.096098   54335 cri.go:89] found id: ""
	I1205 06:33:09.096111   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.096118   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:09.096123   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:09.096226   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:09.121468   54335 cri.go:89] found id: ""
	I1205 06:33:09.121482   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.121489   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:09.121495   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:09.121573   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:09.150975   54335 cri.go:89] found id: ""
	I1205 06:33:09.150989   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.150997   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:09.151004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:09.151063   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:09.176504   54335 cri.go:89] found id: ""
	I1205 06:33:09.176517   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.176527   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:09.176534   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:09.176545   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:09.203288   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:09.203302   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:09.259402   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:09.259423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:09.270454   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:09.270470   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:09.334084   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:09.334095   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:09.334105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
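	The describe step drives kubectl through the node-local kubeconfig at /var/lib/minikube/kubeconfig with the v1.35.0-beta.0 binary, both paths taken from the Run: lines above. A small sanity check, assuming shell access to the node, is to confirm which endpoint that kubeconfig names and to retry a cheaper call by hand:

	    # the server: field is the endpoint kubectl will dial
	    sudo grep -n "server:" /var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get nodes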
	I1205 06:33:11.894816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:11.904810   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:11.904871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:11.930015   54335 cri.go:89] found id: ""
	I1205 06:33:11.930029   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.930036   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:11.930042   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:11.930100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:11.954795   54335 cri.go:89] found id: ""
	I1205 06:33:11.954808   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.954815   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:11.954821   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:11.954877   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:11.978195   54335 cri.go:89] found id: ""
	I1205 06:33:11.978208   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.978231   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:11.978236   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:11.978292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:12.003210   54335 cri.go:89] found id: ""
	I1205 06:33:12.003227   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.003235   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:12.003241   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:12.003326   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:12.033020   54335 cri.go:89] found id: ""
	I1205 06:33:12.033034   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.033041   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:12.033046   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:12.033111   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:12.058060   54335 cri.go:89] found id: ""
	I1205 06:33:12.058073   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.058081   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:12.058086   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:12.058143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:12.082699   54335 cri.go:89] found id: ""
	I1205 06:33:12.082713   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.082719   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:12.082727   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:12.082737   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:12.151250   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:12.151259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:12.151271   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:12.218438   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:12.218461   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:12.248241   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:12.248260   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:12.307820   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:12.307838   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
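	The timestamps show this sweep re-running roughly every three seconds for as long as pgrep finds no apiserver process. A condensed sketch of that outer wait loop, hypothetical in form but using the exact pgrep invocation from the log:

	    # poll until an apiserver process appears, as the repeated Run: lines do
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      echo "kube-apiserver not up yet; probing again in 3s"
	      sleep 3
	    done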
	I1205 06:33:14.820623   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:14.830697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:14.830756   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:14.863478   54335 cri.go:89] found id: ""
	I1205 06:33:14.863492   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.863499   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:14.863504   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:14.863565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:14.895084   54335 cri.go:89] found id: ""
	I1205 06:33:14.895098   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.895106   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:14.895111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:14.895172   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:14.925468   54335 cri.go:89] found id: ""
	I1205 06:33:14.925482   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.925489   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:14.925494   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:14.925614   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:14.954925   54335 cri.go:89] found id: ""
	I1205 06:33:14.954938   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.954945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:14.954950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:14.955009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:14.980066   54335 cri.go:89] found id: ""
	I1205 06:33:14.980080   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.980088   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:14.980093   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:14.980152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:15.028743   54335 cri.go:89] found id: ""
	I1205 06:33:15.028763   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.028770   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:15.028777   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:15.028845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:15.057623   54335 cri.go:89] found id: ""
	I1205 06:33:15.057636   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.057643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:15.057650   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:15.057661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:15.114789   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:15.114808   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:15.126224   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:15.126240   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:15.193033   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:15.193044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:15.193054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:15.256748   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:15.256767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
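Each ~3s tick starts with pgrep -xnf kube-apiserver.*minikube.* (-x exact match, -n newest matching process, -f match against the full command line) and then enumerates the expected control-plane containers. The enumeration is equivalent to this loop run inside the node (a sketch):

    # every name coming back empty means kubelet has not created a single control-plane container
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching $name"
    done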
	I1205 06:33:17.786454   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:17.796729   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:17.796787   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:17.825815   54335 cri.go:89] found id: ""
	I1205 06:33:17.825828   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.825835   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:17.825840   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:17.825900   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:17.855661   54335 cri.go:89] found id: ""
	I1205 06:33:17.855675   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.855682   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:17.855687   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:17.855744   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:17.883175   54335 cri.go:89] found id: ""
	I1205 06:33:17.883188   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.883195   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:17.883200   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:17.883260   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:17.911578   54335 cri.go:89] found id: ""
	I1205 06:33:17.911592   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.911599   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:17.911604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:17.911662   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:17.939731   54335 cri.go:89] found id: ""
	I1205 06:33:17.939750   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.939758   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:17.939763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:17.939818   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:17.968310   54335 cri.go:89] found id: ""
	I1205 06:33:17.968323   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.968330   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:17.968335   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:17.968392   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:17.992739   54335 cri.go:89] found id: ""
	I1205 06:33:17.992752   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.992759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:17.992765   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:17.992776   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:18.006966   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:18.006985   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:18.077932   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:18.077943   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:18.077954   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:18.141190   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:18.141206   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:18.172978   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:18.172995   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:20.730714   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:20.741267   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:20.741329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:20.765738   54335 cri.go:89] found id: ""
	I1205 06:33:20.765751   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.765758   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:20.765763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:20.765821   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:20.790360   54335 cri.go:89] found id: ""
	I1205 06:33:20.790373   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.790380   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:20.790385   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:20.790446   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:20.815276   54335 cri.go:89] found id: ""
	I1205 06:33:20.815290   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.815297   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:20.815302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:20.815361   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:20.840257   54335 cri.go:89] found id: ""
	I1205 06:33:20.840270   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.840277   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:20.840283   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:20.840345   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:20.869989   54335 cri.go:89] found id: ""
	I1205 06:33:20.870003   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.870010   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:20.870015   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:20.870077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:20.908890   54335 cri.go:89] found id: ""
	I1205 06:33:20.908903   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.908915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:20.908921   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:20.908978   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:20.935421   54335 cri.go:89] found id: ""
	I1205 06:33:20.935435   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.935442   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:20.935450   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:20.935460   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:20.946582   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:20.946597   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:21.010138   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
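The describe-nodes attempt uses the node-local kubectl binary and kubeconfig, so it can be retried by hand with the exact command from the log:

    minikube -p functional-101526 ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"

It will keep failing with the same connection refused until an apiserver actually binds :8441.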
	I1205 06:33:21.010149   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:21.010172   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:21.077392   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:21.077409   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:21.105240   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:21.105255   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.662909   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:23.672961   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:23.673022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:23.697989   54335 cri.go:89] found id: ""
	I1205 06:33:23.698003   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.698010   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:23.698016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:23.698078   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:23.723698   54335 cri.go:89] found id: ""
	I1205 06:33:23.723712   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.723718   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:23.723723   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:23.723781   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:23.747403   54335 cri.go:89] found id: ""
	I1205 06:33:23.747416   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.747423   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:23.747428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:23.747486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:23.775201   54335 cri.go:89] found id: ""
	I1205 06:33:23.775214   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.775221   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:23.775227   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:23.775290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:23.799494   54335 cri.go:89] found id: ""
	I1205 06:33:23.799507   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.799514   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:23.799519   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:23.799575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:23.824229   54335 cri.go:89] found id: ""
	I1205 06:33:23.824242   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.824249   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:23.824254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:23.824310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:23.851738   54335 cri.go:89] found id: ""
	I1205 06:33:23.851752   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.851759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:23.851767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:23.851777   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:23.897695   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:23.897710   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.961464   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:23.961482   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:23.972542   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:23.972558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:24.046391   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:24.046402   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:24.046414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.611978   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:26.621743   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:26.621802   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:26.645855   54335 cri.go:89] found id: ""
	I1205 06:33:26.645868   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.645875   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:26.645879   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:26.645934   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:26.675349   54335 cri.go:89] found id: ""
	I1205 06:33:26.675363   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.675369   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:26.675374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:26.675430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:26.698540   54335 cri.go:89] found id: ""
	I1205 06:33:26.698554   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.698561   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:26.698566   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:26.698630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:26.721264   54335 cri.go:89] found id: ""
	I1205 06:33:26.721277   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.721283   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:26.721288   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:26.721343   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:26.744526   54335 cri.go:89] found id: ""
	I1205 06:33:26.744539   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.744546   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:26.744551   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:26.744607   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:26.767695   54335 cri.go:89] found id: ""
	I1205 06:33:26.767719   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.767727   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:26.767732   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:26.767792   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:26.791289   54335 cri.go:89] found id: ""
	I1205 06:33:26.791329   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.791336   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:26.791344   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:26.791354   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:26.856152   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:26.856162   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:26.856173   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.930967   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:26.930987   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:26.958183   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:26.958200   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:27.015910   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:27.015927   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.527097   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:29.537027   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:29.537087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:29.561570   54335 cri.go:89] found id: ""
	I1205 06:33:29.561583   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.561591   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:29.561598   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:29.561655   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:29.586431   54335 cri.go:89] found id: ""
	I1205 06:33:29.586445   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.586452   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:29.586474   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:29.586543   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:29.615124   54335 cri.go:89] found id: ""
	I1205 06:33:29.615139   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.615145   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:29.615151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:29.615208   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:29.640801   54335 cri.go:89] found id: ""
	I1205 06:33:29.640814   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.640831   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:29.640837   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:29.640893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:29.665711   54335 cri.go:89] found id: ""
	I1205 06:33:29.665725   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.665731   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:29.665737   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:29.665797   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:29.690393   54335 cri.go:89] found id: ""
	I1205 06:33:29.690416   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.690423   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:29.690428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:29.690500   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:29.714522   54335 cri.go:89] found id: ""
	I1205 06:33:29.714535   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.714542   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:29.714550   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:29.714562   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:29.770787   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:29.770804   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.781149   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:29.781179   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:29.848588   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:29.848601   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:29.848612   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:29.927646   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:29.927665   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:32.455807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:32.466055   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:32.466118   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:32.490796   54335 cri.go:89] found id: ""
	I1205 06:33:32.490809   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.490816   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:32.490822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:32.490881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:32.515490   54335 cri.go:89] found id: ""
	I1205 06:33:32.515503   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.515511   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:32.515516   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:32.515577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:32.543147   54335 cri.go:89] found id: ""
	I1205 06:33:32.543161   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.543167   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:32.543172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:32.543234   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:32.567288   54335 cri.go:89] found id: ""
	I1205 06:33:32.567301   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.567308   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:32.567313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:32.567370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:32.594765   54335 cri.go:89] found id: ""
	I1205 06:33:32.594778   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.594785   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:32.594790   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:32.594846   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:32.628174   54335 cri.go:89] found id: ""
	I1205 06:33:32.628187   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.628208   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:32.628223   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:32.628310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:32.653204   54335 cri.go:89] found id: ""
	I1205 06:33:32.653218   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.653225   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:32.653232   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:32.653242   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:32.713436   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:32.713452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:32.723879   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:32.723894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:32.788746   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:32.788757   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:32.788767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:32.850792   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:32.850809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
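With kubelet and containerd both running and logging but zero control-plane containers ever appearing, the next thing to verify is whether kubelet picked up the static-pod manifests (a sketch; /etc/kubernetes/manifests is the usual kubeadm path that minikube configures):

    # inside the node
    ls -l /etc/kubernetes/manifests
    sudo journalctl -u kubelet -n 400 | grep -Ei 'apiserver|manifest|static'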
	I1205 06:33:35.388187   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:35.398195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:35.398254   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:35.421975   54335 cri.go:89] found id: ""
	I1205 06:33:35.421989   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.421996   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:35.422002   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:35.422065   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:35.445920   54335 cri.go:89] found id: ""
	I1205 06:33:35.445934   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.445942   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:35.445947   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:35.446009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:35.471144   54335 cri.go:89] found id: ""
	I1205 06:33:35.471157   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.471164   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:35.471169   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:35.471231   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:35.495788   54335 cri.go:89] found id: ""
	I1205 06:33:35.495802   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.495808   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:35.495814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:35.495871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:35.524598   54335 cri.go:89] found id: ""
	I1205 06:33:35.524621   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.524628   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:35.524633   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:35.524701   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:35.549143   54335 cri.go:89] found id: ""
	I1205 06:33:35.549227   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.549235   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:35.549242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:35.549301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:35.574312   54335 cri.go:89] found id: ""
	I1205 06:33:35.574325   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.574332   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:35.574340   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:35.574352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:35.628890   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:35.628908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:35.639919   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:35.639934   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:35.703264   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:35.695689   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.696286   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.697814   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.698255   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.699741   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:35.703273   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:35.703286   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:35.766049   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:35.766067   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:38.297790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:38.307702   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:38.307762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:38.336326   54335 cri.go:89] found id: ""
	I1205 06:33:38.336340   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.336348   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:38.336353   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:38.336410   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:38.361342   54335 cri.go:89] found id: ""
	I1205 06:33:38.361356   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.361363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:38.361371   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:38.361429   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:38.385186   54335 cri.go:89] found id: ""
	I1205 06:33:38.385200   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.385208   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:38.385213   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:38.385281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:38.413803   54335 cri.go:89] found id: ""
	I1205 06:33:38.413816   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.413824   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:38.413829   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:38.413889   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:38.437536   54335 cri.go:89] found id: ""
	I1205 06:33:38.437572   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.437579   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:38.437585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:38.437645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:38.462979   54335 cri.go:89] found id: ""
	I1205 06:33:38.462993   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.463000   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:38.463006   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:38.463069   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:38.488151   54335 cri.go:89] found id: ""
	I1205 06:33:38.488163   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.488170   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:38.488186   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:38.488196   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:38.544680   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:38.544696   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:38.555626   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:38.555641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:38.618692   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:38.610579   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.611054   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.612674   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.613205   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.614695   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:33:38.618701   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:38.618712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:38.682609   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:38.682629   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
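The cycles that follow all repeat this same probe sequence. It can be reproduced by hand against the node — a sketch, assuming the same profile and that crictl is on the node's PATH (both hold for this run, per the commands logged above):

	minikube -p functional-101526 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube -p functional-101526 ssh -- sudo journalctl -u kubelet -n 400 --no-pager

An empty ID list from the first command corresponds to the "0 containers" lines logged here; the kubelet journal is where the reason the static pods never started would normally show up.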
	[The same gather-and-probe cycle repeats every ~3 seconds from 06:33:41 through 06:33:59 (kubectl pids 16941, 17046, 17151, 17261, 17362, 17466, 17564), with identical results on every pass: sudo crictl ps -a --quiet finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; each pass re-gathers the kubelet, dmesg, containerd, and container-status logs; and every "kubectl describe nodes" attempt fails with the same connection-refused errors against localhost:8441.]
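After a minute-plus of identical passes with no control-plane containers, further kubectl retries add nothing; the unit journals are the useful evidence. A sketch of two filters that usually narrow down this class of failure, under the same assumptions as above:

	minikube -p functional-101526 ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20"
	minikube -p functional-101526 ssh -- "sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20"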
	I1205 06:34:01.714396   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:01.724655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:34:01.724715   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:34:01.749246   54335 cri.go:89] found id: ""
	I1205 06:34:01.749259   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.749267   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:34:01.749272   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:34:01.749332   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:34:01.774227   54335 cri.go:89] found id: ""
	I1205 06:34:01.774240   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.774247   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:34:01.774253   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:34:01.774309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:34:01.799574   54335 cri.go:89] found id: ""
	I1205 06:34:01.799588   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.799595   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:34:01.799600   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:34:01.799659   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:34:01.824994   54335 cri.go:89] found id: ""
	I1205 06:34:01.825008   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.825015   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:34:01.825020   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:34:01.825084   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:34:01.854353   54335 cri.go:89] found id: ""
	I1205 06:34:01.854367   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.854374   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:34:01.854380   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:34:01.854440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:34:01.880365   54335 cri.go:89] found id: ""
	I1205 06:34:01.880379   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.880386   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:34:01.880392   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:34:01.880458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:34:01.906944   54335 cri.go:89] found id: ""
	I1205 06:34:01.906957   54335 logs.go:282] 0 containers: []
	W1205 06:34:01.906964   54335 logs.go:284] No container was found matching "kindnet"
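The block above is minikube's control-plane inventory: one crictl query per expected component, each returning no IDs. The same scan, sketched as a loop over the component names from the log (loop structure assumed; the crictl flags are exactly those shown above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # --quiet prints container IDs only; empty output matches the
      # 'found id: ""' / "0 containers" lines in the log.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done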
	I1205 06:34:01.906972   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:34:01.906982   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:34:01.938155   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:34:01.938171   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:34:01.992877   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:34:01.992895   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:34:02.007261   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:34:02.007278   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:34:02.080660   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:34:02.072024   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073018   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.073709   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.075294   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:34:02.076108   17682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:34:02.080669   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:34:02.080680   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:34:04.651581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:04.661868   54335 kubeadm.go:602] duration metric: took 4m3.72973724s to restartPrimaryControlPlane
	W1205 06:34:04.661926   54335 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:34:04.661999   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:34:05.076526   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:34:05.090468   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:34:05.098831   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:34:05.098888   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:34:05.107168   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:34:05.107177   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:34:05.107230   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:34:05.115256   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:34:05.115315   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:34:05.123163   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:34:05.130789   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:34:05.130850   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:34:05.138646   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.147024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:34:05.147082   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.155378   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:34:05.163928   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:34:05.163985   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
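The grep/rm pairs above are minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise (here every file is simply absent, so each grep exits with status 2). Sketched as a loop, using only the commands shown in the log:

    endpoint=https://control-plane.minikube.internal:8441
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Status 2 (file missing) and status 1 (no match) both trigger removal.
      sudo grep "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done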
	I1205 06:34:05.171609   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:34:05.211033   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:34:05.211109   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:34:05.279588   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:34:05.279653   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:34:05.279688   54335 kubeadm.go:319] OS: Linux
	I1205 06:34:05.279731   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:34:05.279778   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:34:05.279824   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:34:05.279876   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:34:05.279924   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:34:05.279971   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:34:05.280015   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:34:05.280062   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:34:05.280106   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:34:05.346565   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:34:05.346667   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:34:05.346756   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:34:05.352620   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:34:05.358148   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:34:05.358236   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:34:05.358307   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:34:05.358383   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:34:05.358442   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:34:05.358512   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:34:05.358564   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:34:05.358626   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:34:05.358685   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:34:05.358759   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:34:05.358831   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:34:05.358869   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:34:05.358923   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:34:05.469895   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:34:05.573671   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:34:05.924291   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:34:06.081184   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:34:06.337744   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:34:06.338499   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:34:06.342999   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:34:06.346294   54335 out.go:252]   - Booting up control plane ...
	I1205 06:34:06.346403   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:34:06.346486   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:34:06.347115   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:34:06.367588   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:34:06.367869   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:34:06.375582   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:34:06.375840   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:34:06.375882   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:34:06.509639   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:34:06.509751   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:38:06.507887   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288295s
	I1205 06:38:06.507910   54335 kubeadm.go:319] 
	I1205 06:38:06.508003   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:38:06.508055   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:38:06.508166   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:38:06.508171   54335 kubeadm.go:319] 
	I1205 06:38:06.508290   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:38:06.508326   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:38:06.508363   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:38:06.508367   54335 kubeadm.go:319] 
	I1205 06:38:06.511849   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:38:06.512286   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:38:06.512417   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:38:06.512667   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:38:06.512672   54335 kubeadm.go:319] 
	I1205 06:38:06.512746   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 06:38:06.512894   54335 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288295s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
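kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint for up to 4m0s, and the failure above means that probe never came back healthy. The triage steps are the ones the error itself suggests, plus the exact probe kubeadm runs (all three commands are taken from the log output):

    systemctl status kubelet                    # is the unit running at all?
    journalctl -xeu kubelet                     # why it is failing, if it is
    curl -sSL http://127.0.0.1:10248/healthz    # the check kubeadm performs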
	
	I1205 06:38:06.512983   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:38:06.919674   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:38:06.932797   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:38:06.932850   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:38:06.940628   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:38:06.940637   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:38:06.940686   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:38:06.948311   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:38:06.948364   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:38:06.955656   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:38:06.963182   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:38:06.963234   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:38:06.970398   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.978024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:38:06.978085   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.985044   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:38:06.992736   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:38:06.992788   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:38:07.000057   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:38:07.042188   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:38:07.042482   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:38:07.116661   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:38:07.116719   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:38:07.116751   54335 kubeadm.go:319] OS: Linux
	I1205 06:38:07.116792   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:38:07.116836   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:38:07.116880   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:38:07.116923   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:38:07.116973   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:38:07.117018   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:38:07.117060   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:38:07.117104   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:38:07.117146   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:38:07.192664   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:38:07.192776   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:38:07.192871   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:38:07.201632   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:38:07.206982   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:38:07.207075   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:38:07.207145   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:38:07.207234   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:38:07.207300   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:38:07.207374   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:38:07.207431   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:38:07.207500   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:38:07.207566   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:38:07.207644   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:38:07.207721   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:38:07.207758   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:38:07.207819   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:38:07.441757   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:38:07.738285   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:38:07.865941   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:38:08.382979   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:38:08.523706   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:38:08.524241   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:38:08.526890   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:38:08.530137   54335 out.go:252]   - Booting up control plane ...
	I1205 06:38:08.530240   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:38:08.530313   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:38:08.530379   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:38:08.552364   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:38:08.552467   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:38:08.559742   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:38:08.560021   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:38:08.560062   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:38:08.679099   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:38:08.679206   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:42:08.679850   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117292s
	I1205 06:42:08.679871   54335 kubeadm.go:319] 
	I1205 06:42:08.679925   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:42:08.679955   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:42:08.680053   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:42:08.680057   54335 kubeadm.go:319] 
	I1205 06:42:08.680155   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:42:08.680184   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:42:08.680212   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:42:08.680215   54335 kubeadm.go:319] 
	I1205 06:42:08.683507   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:42:08.683930   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:42:08.684037   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:42:08.684273   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:42:08.684278   54335 kubeadm.go:319] 
	I1205 06:42:08.684346   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:42:08.684393   54335 kubeadm.go:403] duration metric: took 12m7.791636767s to StartCluster
	I1205 06:42:08.684424   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:42:08.684483   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:42:08.708784   54335 cri.go:89] found id: ""
	I1205 06:42:08.708797   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.708804   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:42:08.708809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:42:08.708865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:42:08.733583   54335 cri.go:89] found id: ""
	I1205 06:42:08.733596   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.733603   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:42:08.733608   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:42:08.733670   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:42:08.762239   54335 cri.go:89] found id: ""
	I1205 06:42:08.762252   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.762259   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:42:08.762264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:42:08.762320   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:42:08.785696   54335 cri.go:89] found id: ""
	I1205 06:42:08.785708   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.785715   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:42:08.785734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:42:08.785790   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:42:08.810075   54335 cri.go:89] found id: ""
	I1205 06:42:08.810088   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.810096   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:42:08.810100   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:42:08.810158   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:42:08.834276   54335 cri.go:89] found id: ""
	I1205 06:42:08.834289   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.834296   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:42:08.834302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:42:08.834358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:42:08.858346   54335 cri.go:89] found id: ""
	I1205 06:42:08.858359   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.858366   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:42:08.858374   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:42:08.858383   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:42:08.913473   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:42:08.913490   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:42:08.924092   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:42:08.924108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:42:08.996046   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:42:08.996056   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:42:08.996066   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:42:09.060557   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:42:09.060575   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 06:42:09.093287   54335 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:42:09.093337   54335 out.go:285] * 
	W1205 06:42:09.093398   54335 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.093427   54335 out.go:285] * 
	W1205 06:42:09.096107   54335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:42:09.099524   54335 out.go:203] 
	W1205 06:42:09.101056   54335 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.101108   54335 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:42:09.101134   54335 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:42:09.103029   54335 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:44:07.119776   23563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:07.120404   23563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:07.121940   23563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:07.122355   23563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:07.123810   23563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:44:07 up  1:26,  0 user,  load average: 0.63, 0.35, 0.40
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:44:03 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:04 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 05 06:44:04 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:04 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:04 functional-101526 kubelet[23368]: E1205 06:44:04.395188   23368 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:04 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:04 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 05 06:44:05 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:05 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:05 functional-101526 kubelet[23412]: E1205 06:44:05.144938   23412 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 05 06:44:05 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:05 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:05 functional-101526 kubelet[23458]: E1205 06:44:05.871650   23458 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:05 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:06 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 05 06:44:06 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:06 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:06 functional-101526 kubelet[23480]: E1205 06:44:06.657316   23480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:06 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:06 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (396.694977ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.26s)
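Note on this failure group: the kubelet journal above shows the real root cause. kubelet v1.35.0-beta.0 exits on startup because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver on port 8441 never comes up and every subsequent status or kubectl call is refused. A minimal triage sketch, assuming shell access to the node; the stat probe is a standard cgroup-version check, and the failCgroupV1 field name is taken from the kubeadm SystemVerification warning above rather than verified against this kubelet build:

	# Which cgroup version is the host running?
	stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" = cgroup v2, "tmpfs" = cgroup v1

	# Per the kubeadm warning, cgroup v1 must be re-enabled explicitly in the
	# KubeletConfiguration before kubelet v1.35 will start on such a host:
	#   failCgroupV1: false

	# Workaround suggested by minikube's own output above:
	minikube start --extra-config=kubelet.cgroup-driver=systemd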

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-101526 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-101526 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.768397ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-101526 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-101526 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-101526 describe po hello-node-connect: exit status 1 (81.560316ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-101526 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-101526 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-101526 logs -l app=hello-node-connect: exit status 1 (67.503486ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-101526 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-101526 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-101526 describe svc hello-node-connect: exit status 1 (61.363962ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-101526 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
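All four kubectl invocations in this post-mortem fail identically: connection refused on 192.168.49.2:8441. That pattern points at the apiserver being down (consistent with the crash-looping kubelet seen earlier), not at the hello-node deployment itself. A quick host-side probe that separates a dead apiserver from a kubeconfig problem, using the IP and port from the errors above (a sketch, not part of the test suite):

	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver unreachable"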
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
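From Docker's perspective the node container is healthy: State.Status is "running", RestartCount is 0, and 8441/tcp is published to 127.0.0.1:32791. That rules out container-level networking and points back at the guest's control plane. A host-side check of the forwarded port (a sketch; the port number is read from the Ports map above):

	curl -sk https://127.0.0.1:32791/version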
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (315.507789ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-101526 cache reload                                                                                                                               │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ ssh     │ functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │ 05 Dec 25 06:29 UTC │
	│ kubectl │ functional-101526 kubectl -- --context functional-101526 get pods                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ start   │ -p functional-101526 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:29 UTC │                     │
	│ config  │ functional-101526 config unset cpus                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cp      │ functional-101526 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ config  │ functional-101526 config get cpus                                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ config  │ functional-101526 config set cpus 2                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ config  │ functional-101526 config get cpus                                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ config  │ functional-101526 config unset cpus                                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ config  │ functional-101526 config get cpus                                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ ssh     │ functional-101526 ssh -n functional-101526 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-101526 ssh echo hello                                                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ cp      │ functional-101526 cp functional-101526:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3011102992/001/cp-test.txt │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-101526 ssh cat /etc/hostname                                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ ssh     │ functional-101526 ssh -n functional-101526 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ tunnel  │ functional-101526 tunnel --alsologtostderr                                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ tunnel  │ functional-101526 tunnel --alsologtostderr                                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ cp      │ functional-101526 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ tunnel  │ functional-101526 tunnel --alsologtostderr                                                                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │                     │
	│ ssh     │ functional-101526 ssh -n functional-101526 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:42 UTC │ 05 Dec 25 06:42 UTC │
	│ addons  │ functional-101526 addons list                                                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:43 UTC │
	│ addons  │ functional-101526 addons list -o json                                                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:43 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:29:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:29:56.087419   54335 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:29:56.087558   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087562   54335 out.go:374] Setting ErrFile to fd 2...
	I1205 06:29:56.087566   54335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:29:56.087860   54335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:29:56.088207   54335 out.go:368] Setting JSON to false
	I1205 06:29:56.088971   54335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4343,"bootTime":1764911853,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:29:56.089024   54335 start.go:143] virtualization:  
	I1205 06:29:56.093248   54335 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:29:56.096933   54335 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:29:56.097023   54335 notify.go:221] Checking for updates...
	I1205 06:29:56.100720   54335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:29:56.103681   54335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:29:56.106734   54335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:29:56.110260   54335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:29:56.113288   54335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:29:56.116882   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:56.116976   54335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:29:56.159923   54335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:29:56.160029   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.216532   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.206341969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.216625   54335 docker.go:319] overlay module found
	I1205 06:29:56.221471   54335 out.go:179] * Using the docker driver based on existing profile
	I1205 06:29:56.224343   54335 start.go:309] selected driver: docker
	I1205 06:29:56.224353   54335 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.224443   54335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:29:56.224557   54335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:29:56.277319   54335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:29:56.268438767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:29:56.277800   54335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:29:56.277821   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:56.277884   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:56.278047   54335 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:29:56.282961   54335 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
	I1205 06:29:56.285729   54335 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:29:56.288624   54335 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:29:56.291591   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:56.291657   54335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:29:56.310650   54335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:29:56.310660   54335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:29:56.348534   54335 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 06:29:56.550462   54335 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 06:29:56.550637   54335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
	I1205 06:29:56.550701   54335 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550781   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:29:56.550790   54335 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.262µs
	I1205 06:29:56.550802   54335 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:29:56.550812   54335 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550840   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:29:56.550844   54335 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.707µs
	I1205 06:29:56.550849   54335 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550857   54335 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550888   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:29:56.550892   54335 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.93µs
	I1205 06:29:56.550897   54335 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550906   54335 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550932   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:29:56.550937   54335 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:29:56.550939   54335 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.076µs
	I1205 06:29:56.550944   54335 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550952   54335 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550977   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:29:56.550965   54335 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.550981   54335 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.187µs
	I1205 06:29:56.550986   54335 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:29:56.550993   54335 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551016   54335 start.go:364] duration metric: took 28.546µs to acquireMachinesLock for "functional-101526"
	I1205 06:29:56.551022   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:29:56.551025   54335 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.24µs
	I1205 06:29:56.551035   54335 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:29:56.551034   54335 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:29:56.551039   54335 fix.go:54] fixHost starting: 
	I1205 06:29:56.551042   54335 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551065   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:29:56.551069   54335 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.16µs
	I1205 06:29:56.551073   54335 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:29:56.551081   54335 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:29:56.551103   54335 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:29:56.551106   54335 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 26.888µs
	I1205 06:29:56.551110   54335 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:29:56.551117   54335 cache.go:87] Successfully saved all images to host disk.
	I1205 06:29:56.551339   54335 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
	I1205 06:29:56.568156   54335 fix.go:112] recreateIfNeeded on functional-101526: state=Running err=<nil>
	W1205 06:29:56.568181   54335 fix.go:138] unexpected machine state, will restart: <nil>
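
The state probe driving this "will restart" decision can be reproduced by hand with the same Go-template format string (a minimal check, using the profile name from this run; expected output here is "running"):

    docker container inspect functional-101526 --format '{{.State.Status}}'
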
	I1205 06:29:56.571582   54335 out.go:252] * Updating the running docker "functional-101526" container ...
	I1205 06:29:56.571608   54335 machine.go:94] provisionDockerMachine start ...
	I1205 06:29:56.571688   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.588675   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.588995   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.589001   54335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:29:56.736543   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.736557   54335 ubuntu.go:182] provisioning hostname "functional-101526"
	I1205 06:29:56.736615   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.754489   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.754781   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.754789   54335 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
	I1205 06:29:56.915291   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
	
	I1205 06:29:56.915355   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:56.933044   54335 main.go:143] libmachine: Using SSH client type: native
	I1205 06:29:56.933393   54335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1205 06:29:56.933407   54335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:29:57.085183   54335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
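
All of the provisioning commands above travel over the container's forwarded SSH port; the same channel can be opened by hand (port, user, and key path taken from this log):

    ssh -i /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa \
        -p 32788 docker@127.0.0.1 hostname
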
	I1205 06:29:57.085199   54335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 06:29:57.085221   54335 ubuntu.go:190] setting up certificates
	I1205 06:29:57.085229   54335 provision.go:84] configureAuth start
	I1205 06:29:57.085299   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:57.101349   54335 provision.go:143] copyHostCerts
	I1205 06:29:57.101410   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 06:29:57.101421   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 06:29:57.101492   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 06:29:57.101592   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 06:29:57.101596   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 06:29:57.101621   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 06:29:57.101678   54335 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 06:29:57.101680   54335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 06:29:57.101703   54335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 06:29:57.101750   54335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
	I1205 06:29:57.543303   54335 provision.go:177] copyRemoteCerts
	I1205 06:29:57.543357   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:29:57.543409   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.560691   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.666006   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:29:57.683446   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:29:57.700645   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:29:57.717863   54335 provision.go:87] duration metric: took 632.597506ms to configureAuth
	I1205 06:29:57.717880   54335 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:29:57.718064   54335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:29:57.718070   54335 machine.go:97] duration metric: took 1.146457487s to provisionDockerMachine
	I1205 06:29:57.718076   54335 start.go:293] postStartSetup for "functional-101526" (driver="docker")
	I1205 06:29:57.718086   54335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:29:57.718137   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:29:57.718174   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.735331   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:57.841496   54335 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:29:57.844702   54335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:29:57.844721   54335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:29:57.844731   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 06:29:57.844783   54335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 06:29:57.844859   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 06:29:57.844934   54335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
	I1205 06:29:57.844984   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
	I1205 06:29:57.852337   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:57.869668   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
	I1205 06:29:57.887019   54335 start.go:296] duration metric: took 168.92936ms for postStartSetup
	I1205 06:29:57.887102   54335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:29:57.887149   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:57.903894   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.011756   54335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:29:58.016900   54335 fix.go:56] duration metric: took 1.465853892s for fixHost
	I1205 06:29:58.016919   54335 start.go:83] releasing machines lock for "functional-101526", held for 1.465896107s
	I1205 06:29:58.016988   54335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
	I1205 06:29:58.035591   54335 ssh_runner.go:195] Run: cat /version.json
	I1205 06:29:58.035642   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.035909   54335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:29:58.035957   54335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
	I1205 06:29:58.053529   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.058886   54335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
	I1205 06:29:58.156784   54335 ssh_runner.go:195] Run: systemctl --version
	I1205 06:29:58.245777   54335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:29:58.249918   54335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:29:58.249974   54335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:29:58.257133   54335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:29:58.257146   54335 start.go:496] detecting cgroup driver to use...
	I1205 06:29:58.257190   54335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:29:58.257233   54335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 06:29:58.273979   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 06:29:58.288748   54335 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:29:58.288814   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:29:58.305248   54335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:29:58.319216   54335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:29:58.440307   54335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:29:58.559446   54335 docker.go:234] disabling docker service ...
	I1205 06:29:58.559504   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:29:58.574399   54335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:29:58.587407   54335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:29:58.701676   54335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:29:58.808689   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:29:58.821276   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:29:58.836401   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 06:29:58.846421   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 06:29:58.855275   54335 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 06:29:58.855341   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 06:29:58.864125   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.872649   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 06:29:58.881354   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 06:29:58.890354   54335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:29:58.898337   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 06:29:58.907106   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 06:29:58.915882   54335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 06:29:58.924414   54335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:29:58.931809   54335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:29:58.939114   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.065680   54335 ssh_runner.go:195] Run: sudo systemctl restart containerd
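
The sed edits above converge on a handful of CRI settings (cgroupfs as the cgroup driver, the pause:3.10.1 sandbox image, unprivileged ports enabled); an illustrative spot-check after the restart, not part of this run, would be:

    minikube -p functional-101526 ssh -- \
      grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
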
	I1205 06:29:59.195981   54335 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 06:29:59.196040   54335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 06:29:59.199987   54335 start.go:564] Will wait 60s for crictl version
	I1205 06:29:59.200039   54335 ssh_runner.go:195] Run: which crictl
	I1205 06:29:59.203560   54335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:29:59.235649   54335 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 06:29:59.235710   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.255405   54335 ssh_runner.go:195] Run: containerd --version
	I1205 06:29:59.283346   54335 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 06:29:59.286262   54335 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:29:59.301845   54335 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:29:59.308610   54335 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:29:59.311441   54335 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:29:59.311553   54335 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 06:29:59.311627   54335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:29:59.336067   54335 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 06:29:59.336079   54335 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:29:59.336085   54335 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1205 06:29:59.336175   54335 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:29:59.336232   54335 ssh_runner.go:195] Run: sudo crictl info
	I1205 06:29:59.363378   54335 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:29:59.363395   54335 cni.go:84] Creating CNI manager for ""
	I1205 06:29:59.363403   54335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:29:59.363415   54335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:29:59.363436   54335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:29:59.363559   54335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-101526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:29:59.363624   54335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:29:59.371046   54335 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:29:59.371108   54335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:29:59.378354   54335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 06:29:59.390503   54335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:29:59.402745   54335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
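
Before the freshly rendered kubeadm.yaml.new is swapped into place further below, it could also be checked against the target kubeadm's schema; recent kubeadm releases ship a validate subcommand, so one illustrative check (not part of this run) would be:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
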
	I1205 06:29:59.414910   54335 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:29:59.418646   54335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:29:59.529578   54335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:29:59.846402   54335 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
	I1205 06:29:59.846413   54335 certs.go:195] generating shared ca certs ...
	I1205 06:29:59.846426   54335 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:29:59.846569   54335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 06:29:59.846610   54335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 06:29:59.846616   54335 certs.go:257] generating profile certs ...
	I1205 06:29:59.846728   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
	I1205 06:29:59.846770   54335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
	I1205 06:29:59.846811   54335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
	I1205 06:29:59.846921   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 06:29:59.846956   54335 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 06:29:59.846962   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:29:59.846989   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:29:59.847014   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:29:59.847036   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 06:29:59.847085   54335 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 06:29:59.847736   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:29:59.867939   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:29:59.888562   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:29:59.907283   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:29:59.927879   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:29:59.944224   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:29:59.960459   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:29:59.979078   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:29:59.996293   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 06:30:00.066962   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 06:30:00.118991   54335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:30:00.185989   54335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:30:00.235503   54335 ssh_runner.go:195] Run: openssl version
	I1205 06:30:00.255104   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.270140   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 06:30:00.290181   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295705   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.295771   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 06:30:00.399762   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:30:00.412238   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.433387   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 06:30:00.449934   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455249   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.455319   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 06:30:00.517764   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:30:00.530824   54335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.546605   54335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:30:00.555560   54335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561005   54335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.561068   54335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:30:00.611790   54335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
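
The ln/openssl dance above follows OpenSSL's lookup-by-hash convention: each CA installed under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can find it. Spelled out by hand (the hash, e.g. b5213941 here, is per-certificate):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
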
	I1205 06:30:00.623580   54335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:30:00.628736   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:30:00.674439   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:30:00.717432   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:30:00.760669   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:30:00.802949   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:30:00.845730   54335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
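
Each -checkend probe above asks one question: will this certificate expire within 86400 seconds (24 hours)? A zero exit means the cert is still good for at least a day, which is what these probes are screening for. By hand:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo 'valid for >24h' || echo 'expires within 24h'
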
	I1205 06:30:00.892769   54335 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:30:00.892871   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 06:30:00.892957   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.923464   54335 cri.go:89] found id: ""
	I1205 06:30:00.923530   54335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:30:00.932111   54335 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:30:00.932122   54335 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:30:00.932182   54335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:30:00.940210   54335 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:00.940808   54335 kubeconfig.go:125] found "functional-101526" server: "https://192.168.49.2:8441"
	I1205 06:30:00.942221   54335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:30:00.951085   54335 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:15:26.552544518 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:29:59.409281720 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1205 06:30:00.951105   54335 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:30:00.951116   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1205 06:30:00.951177   54335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:30:00.983535   54335 cri.go:89] found id: ""
	I1205 06:30:00.983600   54335 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:30:00.999793   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:30:01.011193   54335 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  5 06:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5628 Dec  5 06:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  5 06:19 /etc/kubernetes/scheduler.conf
	
	I1205 06:30:01.011277   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:30:01.020421   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:30:01.029014   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.029083   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:30:01.037495   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.045879   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.045943   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:30:01.054299   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:30:01.063067   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:30:01.063128   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:30:01.071319   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:30:01.080035   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:01.126871   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.550689   54335 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.423791138s)
	I1205 06:30:02.550750   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.758304   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:30:02.826924   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
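
Rather than a full `kubeadm init`, this restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory. The complete phase list is available from the same binary (illustrative):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase --help
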
	I1205 06:30:02.872904   54335 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:30:02.872975   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:03.373516   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:03.873269   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:04.373873   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:04.873262   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:05.374099   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:05.873790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:06.374013   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:06.873783   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:07.373319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:07.874006   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:08.374019   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:08.873288   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:09.373772   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:09.873842   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:10.373300   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:10.874107   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:11.373177   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:11.873355   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:12.373736   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:12.873308   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:13.374049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:13.873112   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:14.374044   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:14.873826   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:15.373350   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:15.873570   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:16.373205   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:16.873133   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:17.373949   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:17.873343   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:18.373376   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:18.873437   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:19.373102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:19.874076   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:20.373694   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:20.873676   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:21.373293   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:21.873915   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:22.373279   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:22.873197   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:23.373182   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:23.873041   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:24.373194   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:24.873913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:25.373334   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:25.874011   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:26.373620   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:26.873898   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:27.373174   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:27.874034   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:28.373282   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:28.873430   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:29.374096   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:29.873271   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:30.373863   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:30.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:31.373041   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:31.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:32.373311   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:32.873944   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:33.373660   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:33.873399   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:34.373269   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:34.873154   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:35.374056   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:35.873925   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:36.373314   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:36.873816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:37.373079   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:37.873173   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:38.373973   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:38.873278   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:39.373892   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:39.873395   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:40.373274   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:40.874009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:41.374054   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:41.873330   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:42.373986   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:42.873130   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:43.373582   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:43.873189   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:44.373894   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:44.873102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:45.373202   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:45.873349   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:46.373273   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:46.873147   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:47.374102   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:47.873856   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:48.374059   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:48.873728   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:49.373337   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:49.873152   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:50.373886   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:50.873110   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:51.373740   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:51.873807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:52.373287   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:52.873175   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:53.373983   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:53.873898   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:54.374080   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:54.873113   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:55.373274   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:55.874004   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:56.373964   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:56.873273   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:57.373188   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:57.873857   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:58.373297   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:58.873189   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:59.373797   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:30:59.874078   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:00.374118   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:00.873073   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:01.373094   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:01.873990   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:02.373960   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
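
The half-second cadence above is a bounded retry loop waiting for the apiserver process to appear; stripped to its core it is roughly (a sketch, ignoring the surrounding timeout handling — here the window closed without a match):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
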
	I1205 06:31:02.873246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:02.873355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:02.899119   54335 cri.go:89] found id: ""
	I1205 06:31:02.899133   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.899140   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:02.899145   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:02.899201   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:02.926015   54335 cri.go:89] found id: ""
	I1205 06:31:02.926028   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.926036   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:02.926041   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:02.926100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:02.950775   54335 cri.go:89] found id: ""
	I1205 06:31:02.950788   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.950795   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:02.950800   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:02.950859   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:02.978268   54335 cri.go:89] found id: ""
	I1205 06:31:02.978282   54335 logs.go:282] 0 containers: []
	W1205 06:31:02.978289   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:02.978294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:02.978352   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:03.015482   54335 cri.go:89] found id: ""
	I1205 06:31:03.015497   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.015506   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:03.015511   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:03.015575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:03.041353   54335 cri.go:89] found id: ""
	I1205 06:31:03.041366   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.041373   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:03.041379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:03.041463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:03.066457   54335 cri.go:89] found id: ""
	I1205 06:31:03.066472   54335 logs.go:282] 0 containers: []
	W1205 06:31:03.066479   54335 logs.go:284] No container was found matching "kindnet"
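	Each cri.go cycle above runs `crictl ps -a --quiet --name=<component>` per control-plane component and treats empty output as zero containers, which is why every lookup here reports "found id" as empty. A self-contained sketch of that enumeration, with listContainers as an illustrative name rather than minikube's API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the crictl calls in the log: `--quiet` prints one
	// container ID per line, so an empty result means no matching container.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // drop blank lines
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}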
	I1205 06:31:03.066487   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:03.066502   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:03.121069   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:03.121087   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:03.131794   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:03.131809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:03.195836   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:03.188092   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.188541   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190139   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.190560   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:03.191959   11306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
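	Every kubectl failure in the block above reduces to the same root cause: nothing is listening on localhost:8441 (the --apiserver-port this profile was started with), so each API request fails with "connect: connection refused" before any Kubernetes-level error can occur. A sketch that checks exactly that condition, assuming a hypothetical probeAPIServer helper:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer checks the same thing kubectl trips over in the log:
	// whether anything is listening on the apiserver port at all.
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
		}
		conn.Close()
		return nil
	}

	func main() {
		if err := probeAPIServer("localhost:8441"); err != nil {
			// While no apiserver process runs, this prints a dial error such as
			// "connect: connection refused", matching the kubectl stderr above.
			fmt.Println(err)
		}
	}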
	I1205 06:31:03.195847   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:03.195859   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:03.258177   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:03.258195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
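	The "Gathering logs" steps that close each cycle shell out to journalctl, dmesg, and crictl (falling back to docker), each capped at 400 lines; minikube then repeats the whole pgrep/crictl/gather cycle every few seconds, as the timestamps below show. A sketch of one such diagnostics pass, with the command strings copied from the log and gather as an illustrative helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one of the shell pipelines from the log via bash -c and
	// prints whatever it produced, error included, the way logs.go does.
	func gather(label, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}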
	I1205 06:31:05.785947   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:05.795932   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:05.795992   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:05.822996   54335 cri.go:89] found id: ""
	I1205 06:31:05.823010   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.823017   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:05.823022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:05.823079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:05.851647   54335 cri.go:89] found id: ""
	I1205 06:31:05.851660   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.851667   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:05.851671   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:05.851728   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:05.888840   54335 cri.go:89] found id: ""
	I1205 06:31:05.888853   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.888860   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:05.888865   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:05.888923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:05.916749   54335 cri.go:89] found id: ""
	I1205 06:31:05.916763   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.916771   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:05.916776   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:05.916838   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:05.941885   54335 cri.go:89] found id: ""
	I1205 06:31:05.941898   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.941905   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:05.941910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:05.941970   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:05.967174   54335 cri.go:89] found id: ""
	I1205 06:31:05.967188   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.967195   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:05.967202   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:05.967259   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:05.991608   54335 cri.go:89] found id: ""
	I1205 06:31:05.991622   54335 logs.go:282] 0 containers: []
	W1205 06:31:05.991629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:05.991637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:05.991647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:06.048885   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:06.048907   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:06.060386   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:06.060403   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:06.139830   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:06.132213   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.132764   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134526   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.134986   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:06.136558   11414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:06.139840   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:06.139853   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:06.202288   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:06.202307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:08.730029   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:08.740211   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:08.740272   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:08.763977   54335 cri.go:89] found id: ""
	I1205 06:31:08.763991   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.763998   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:08.764004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:08.764064   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:08.788621   54335 cri.go:89] found id: ""
	I1205 06:31:08.788635   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.788642   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:08.788647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:08.788702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:08.813441   54335 cri.go:89] found id: ""
	I1205 06:31:08.813454   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.813461   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:08.813466   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:08.813522   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:08.837930   54335 cri.go:89] found id: ""
	I1205 06:31:08.837944   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.837951   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:08.837956   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:08.838014   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:08.865898   54335 cri.go:89] found id: ""
	I1205 06:31:08.865911   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.865918   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:08.865923   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:08.865985   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:08.893385   54335 cri.go:89] found id: ""
	I1205 06:31:08.893410   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.893417   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:08.893422   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:08.893488   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:08.922394   54335 cri.go:89] found id: ""
	I1205 06:31:08.922407   54335 logs.go:282] 0 containers: []
	W1205 06:31:08.922414   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:08.922422   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:08.922432   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:08.977895   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:08.977913   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:08.989011   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:08.989025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:09.057444   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:09.048642   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.049814   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.051664   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.052030   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:09.053581   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:09.057456   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:09.057471   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:09.119855   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:09.119875   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.657869   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:11.668122   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:11.668185   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:11.692170   54335 cri.go:89] found id: ""
	I1205 06:31:11.692183   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.692190   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:11.692195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:11.692253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:11.716930   54335 cri.go:89] found id: ""
	I1205 06:31:11.716945   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.716951   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:11.716962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:11.717031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:11.741795   54335 cri.go:89] found id: ""
	I1205 06:31:11.741808   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.741815   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:11.741820   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:11.741881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:11.766411   54335 cri.go:89] found id: ""
	I1205 06:31:11.766425   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.766431   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:11.766437   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:11.766495   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:11.791195   54335 cri.go:89] found id: ""
	I1205 06:31:11.791209   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.791216   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:11.791221   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:11.791280   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:11.819219   54335 cri.go:89] found id: ""
	I1205 06:31:11.819233   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.819245   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:11.819251   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:11.819312   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:11.851464   54335 cri.go:89] found id: ""
	I1205 06:31:11.851478   54335 logs.go:282] 0 containers: []
	W1205 06:31:11.851491   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:11.851498   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:11.851508   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:11.931606   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:11.931625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:11.960389   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:11.960407   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:12.021080   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:12.021102   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:12.032273   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:12.032290   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:12.097324   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:12.088793   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.089496   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091075   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.091390   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:12.093729   11637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:14.597581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:14.607724   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:14.607782   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:14.632907   54335 cri.go:89] found id: ""
	I1205 06:31:14.632921   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.632928   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:14.632933   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:14.632989   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:14.657884   54335 cri.go:89] found id: ""
	I1205 06:31:14.657898   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.657905   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:14.657910   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:14.657965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:14.681364   54335 cri.go:89] found id: ""
	I1205 06:31:14.681377   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.681384   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:14.681389   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:14.681462   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:14.709552   54335 cri.go:89] found id: ""
	I1205 06:31:14.709566   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.709573   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:14.709578   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:14.709642   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:14.733105   54335 cri.go:89] found id: ""
	I1205 06:31:14.733118   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.733125   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:14.733130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:14.733217   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:14.759861   54335 cri.go:89] found id: ""
	I1205 06:31:14.759874   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.759881   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:14.759887   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:14.759943   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:14.785666   54335 cri.go:89] found id: ""
	I1205 06:31:14.785679   54335 logs.go:282] 0 containers: []
	W1205 06:31:14.785686   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:14.785693   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:14.785706   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:14.854767   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:14.841994   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.842592   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844142   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.844598   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:14.846116   11718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:14.854785   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:14.854795   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:14.922701   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:14.922719   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:14.953207   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:14.953223   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:15.010462   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:15.010484   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.529572   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:17.539788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:17.539847   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:17.563677   54335 cri.go:89] found id: ""
	I1205 06:31:17.563691   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.563698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:17.563703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:17.563774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:17.593628   54335 cri.go:89] found id: ""
	I1205 06:31:17.593642   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.593649   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:17.593654   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:17.593720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:17.619071   54335 cri.go:89] found id: ""
	I1205 06:31:17.619084   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.619092   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:17.619097   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:17.619153   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:17.642944   54335 cri.go:89] found id: ""
	I1205 06:31:17.642958   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.642964   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:17.642970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:17.643037   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:17.667755   54335 cri.go:89] found id: ""
	I1205 06:31:17.667768   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.667775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:17.667780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:17.667836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:17.691060   54335 cri.go:89] found id: ""
	I1205 06:31:17.691073   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.691080   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:17.691085   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:17.691152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:17.714527   54335 cri.go:89] found id: ""
	I1205 06:31:17.714540   54335 logs.go:282] 0 containers: []
	W1205 06:31:17.714547   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:17.714554   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:17.714564   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:17.777347   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:17.777365   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:17.804848   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:17.804862   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:17.866054   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:17.866072   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:17.877290   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:17.877305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:17.944157   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:17.936780   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.937336   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939068   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.939357   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:17.940820   11855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:20.445814   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:20.455929   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:20.456007   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:20.480265   54335 cri.go:89] found id: ""
	I1205 06:31:20.480280   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.480287   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:20.480294   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:20.480371   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:20.504045   54335 cri.go:89] found id: ""
	I1205 06:31:20.504059   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.504065   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:20.504070   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:20.504128   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:20.528811   54335 cri.go:89] found id: ""
	I1205 06:31:20.528824   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.528831   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:20.528836   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:20.528893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:20.553249   54335 cri.go:89] found id: ""
	I1205 06:31:20.553272   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.553279   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:20.553284   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:20.553358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:20.577735   54335 cri.go:89] found id: ""
	I1205 06:31:20.577767   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.577775   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:20.577780   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:20.577839   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:20.603821   54335 cri.go:89] found id: ""
	I1205 06:31:20.603835   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.603852   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:20.603858   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:20.603955   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:20.632954   54335 cri.go:89] found id: ""
	I1205 06:31:20.632985   54335 logs.go:282] 0 containers: []
	W1205 06:31:20.632992   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:20.633000   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:20.633010   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:20.688822   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:20.688840   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:20.700167   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:20.700183   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:20.766199   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:20.757515   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.758089   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760039   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.760823   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:20.762597   11941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:20.766209   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:20.766219   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:20.829413   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:20.829439   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.369036   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:23.379250   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:23.379308   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:23.407254   54335 cri.go:89] found id: ""
	I1205 06:31:23.407268   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.407275   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:23.407280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:23.407335   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:23.431989   54335 cri.go:89] found id: ""
	I1205 06:31:23.432002   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.432009   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:23.432014   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:23.432079   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:23.467269   54335 cri.go:89] found id: ""
	I1205 06:31:23.467287   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.467293   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:23.467299   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:23.467362   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:23.490943   54335 cri.go:89] found id: ""
	I1205 06:31:23.490956   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.490962   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:23.490968   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:23.491025   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:23.519217   54335 cri.go:89] found id: ""
	I1205 06:31:23.519232   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.519239   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:23.519244   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:23.519306   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:23.543863   54335 cri.go:89] found id: ""
	I1205 06:31:23.543877   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.543883   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:23.543888   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:23.543956   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:23.567865   54335 cri.go:89] found id: ""
	I1205 06:31:23.567878   54335 logs.go:282] 0 containers: []
	W1205 06:31:23.567897   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:23.567905   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:23.567914   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:23.632509   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:23.632529   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:23.662290   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:23.662305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:23.719254   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:23.719272   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:23.730331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:23.730346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:23.792133   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:23.784315   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.784953   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.786670   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.787328   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:23.788816   12059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:26.293128   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:26.304108   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:26.304168   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:26.331011   54335 cri.go:89] found id: ""
	I1205 06:31:26.331024   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.331031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:26.331040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:26.331097   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:26.358547   54335 cri.go:89] found id: ""
	I1205 06:31:26.358562   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.358569   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:26.358573   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:26.358630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:26.387125   54335 cri.go:89] found id: ""
	I1205 06:31:26.387139   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.387146   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:26.387151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:26.387210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:26.412329   54335 cri.go:89] found id: ""
	I1205 06:31:26.412343   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.412350   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:26.412355   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:26.412433   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:26.437117   54335 cri.go:89] found id: ""
	I1205 06:31:26.437130   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.437138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:26.437142   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:26.437253   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:26.465767   54335 cri.go:89] found id: ""
	I1205 06:31:26.465779   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.465787   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:26.465792   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:26.465855   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:26.489618   54335 cri.go:89] found id: ""
	I1205 06:31:26.489636   54335 logs.go:282] 0 containers: []
	W1205 06:31:26.489643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:26.489651   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:26.489661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:26.516285   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:26.516307   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:26.571623   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:26.571639   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:26.582532   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:26.582547   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:26.648629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:26.640184   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.640930   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.642740   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.643413   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:26.644996   12164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:26.648640   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:26.648652   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
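The cycle above appears to be minikube's wait-and-diagnose loop: for each control-plane component it asks the CRI for matching containers, and when none exist it falls back to gathering node-level logs. The same checks can be run by hand; a sketch using the exact commands from the log, with <profile> again a hypothetical placeholder:

  # Same CRI query the loop runs; empty output means containerd never
  # created the kube-apiserver container.
  minikube -p <profile> ssh -- "sudo crictl ps -a --quiet --name=kube-apiserver"
  # When the container list is empty, the kubelet journal is the next place
  # to look for why the static control-plane pods are not coming up.
  minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400 --no-pager"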
	[The same diagnostic cycle repeats roughly every three seconds from 06:31:29 through 06:31:47 (kubectl probe PIDs 12253, 12366, 12470, 12574, 12678, 12777, and 12885). Each iteration is identical to the one above: pgrep finds no kube-apiserver process; crictl returns zero containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet; the kubelet, dmesg, containerd, and container-status logs are gathered; and "kubectl describe nodes" exits with status 1, printing the same connection-refused errors against localhost:8441.]
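The failing "describe nodes" probe can also be reproduced by hand with the kubectl binary minikube stages on the node; a sketch using the path and flags verbatim from the log, with the profile name left as a hypothetical placeholder:

  minikube -p <profile> ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"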
	I1205 06:31:49.735640   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:49.745807   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:49.745868   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:49.770984   54335 cri.go:89] found id: ""
	I1205 06:31:49.770997   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.771004   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:49.771009   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:49.771072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:49.795524   54335 cri.go:89] found id: ""
	I1205 06:31:49.795538   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.795545   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:49.795550   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:49.795605   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:49.820126   54335 cri.go:89] found id: ""
	I1205 06:31:49.820140   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.820147   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:49.820152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:49.820209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:49.844379   54335 cri.go:89] found id: ""
	I1205 06:31:49.844392   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.844401   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:49.844408   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:49.844465   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:49.871132   54335 cri.go:89] found id: ""
	I1205 06:31:49.871144   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.871152   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:49.871157   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:49.871214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:49.894867   54335 cri.go:89] found id: ""
	I1205 06:31:49.894880   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.894887   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:49.894893   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:49.894949   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:49.920144   54335 cri.go:89] found id: ""
	I1205 06:31:49.920157   54335 logs.go:282] 0 containers: []
	W1205 06:31:49.920164   54335 logs.go:284] No container was found matching "kindnet"
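The seven probes above (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) are the same crictl query applied once per component, and all of them return empty because no control-plane containers were ever created. A compact equivalent of the scan, as a sketch:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # empty output means the component was not found
    done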
	I1205 06:31:49.920171   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:49.920181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:49.979573   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:49.979595   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:49.990405   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:49.990420   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:50.061353   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:50.052917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.053917   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.055577   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.056112   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:50.057715   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
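Note that the harness prints the failing command's stderr twice: once inline in the logs.go:130 warning and once in the delimited ** stderr ** block; the two copies are identical. The root cause is unchanged: kubectl, run with the node-local kubeconfig, cannot reach localhost:8441. A direct probe of that endpoint would fail the same way while no apiserver is running (curl here is an assumption, not something the harness executes):

    curl -k https://localhost:8441/healthz    # expected: connection refused until kube-apiserver is up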
	I1205 06:31:50.061364   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:50.061376   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:50.139097   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:50.139131   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:52.678459   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:52.688604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:52.688663   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:52.712686   54335 cri.go:89] found id: ""
	I1205 06:31:52.712700   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.712707   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:52.712712   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:52.712774   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:52.746954   54335 cri.go:89] found id: ""
	I1205 06:31:52.746968   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.746975   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:52.746980   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:52.747039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:52.771325   54335 cri.go:89] found id: ""
	I1205 06:31:52.771338   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.771345   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:52.771350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:52.771406   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:52.795882   54335 cri.go:89] found id: ""
	I1205 06:31:52.795896   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.795902   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:52.795908   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:52.795965   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:52.820064   54335 cri.go:89] found id: ""
	I1205 06:31:52.820079   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.820085   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:52.820090   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:52.820150   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:52.848297   54335 cri.go:89] found id: ""
	I1205 06:31:52.848311   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.848317   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:52.848323   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:52.848381   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:52.876041   54335 cri.go:89] found id: ""
	I1205 06:31:52.876055   54335 logs.go:282] 0 containers: []
	W1205 06:31:52.876062   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:52.876069   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:52.876079   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:52.931790   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:52.931811   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:52.942929   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:52.942944   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:53.007664   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:52.997863   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:52.998579   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.000280   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.001013   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:53.002974   13096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:53.007675   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:53.007686   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:53.073695   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:53.073712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
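Zooming out, this whole stretch is one retry loop: pgrep -xnf kube-apiserver.*minikube.* checks for a running apiserver process (-f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match), and on failure the harness re-gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before trying again; the timestamps show a cadence of roughly 2.5-3 seconds per iteration. The shape of the loop, sketched in shell (the real harness is Go, so this is an illustration only):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 2.5
      # re-gather kubelet / dmesg / describe-nodes / containerd / container-status logs
    done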
	I1205 06:31:55.610763   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:55.620883   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:55.620945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:55.645677   54335 cri.go:89] found id: ""
	I1205 06:31:55.645691   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.645698   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:55.645703   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:55.645763   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:55.670962   54335 cri.go:89] found id: ""
	I1205 06:31:55.670975   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.670982   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:55.670987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:55.671045   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:55.695354   54335 cri.go:89] found id: ""
	I1205 06:31:55.695367   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.695374   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:55.695379   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:55.695447   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:55.719264   54335 cri.go:89] found id: ""
	I1205 06:31:55.719277   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.719284   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:55.719290   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:55.719347   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:55.742928   54335 cri.go:89] found id: ""
	I1205 06:31:55.742941   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.742948   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:55.742954   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:55.743013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:55.766643   54335 cri.go:89] found id: ""
	I1205 06:31:55.766657   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.766664   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:55.766672   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:55.766729   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:55.789985   54335 cri.go:89] found id: ""
	I1205 06:31:55.789999   54335 logs.go:282] 0 containers: []
	W1205 06:31:55.790005   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:55.790051   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:55.790062   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:31:55.817984   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:55.818000   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:31:55.874068   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:55.874085   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
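For reference, the dmesg invocation above uses short options that expand to: -H human-readable output, -P no pager, -L=never no color; --level restricts output to warnings and worse, and tail caps it at 400 lines. The same filter with long options (assumed equivalent on util-linux dmesg):

    sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400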
	I1205 06:31:55.885873   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:55.885888   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:55.950375   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:55.941637   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.942508   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.944636   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.945456   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:55.946344   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:55.950385   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:55.950396   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.513319   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:31:58.523187   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:31:58.523244   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:31:58.546403   54335 cri.go:89] found id: ""
	I1205 06:31:58.546416   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.546423   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:31:58.546429   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:31:58.546486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:31:58.570005   54335 cri.go:89] found id: ""
	I1205 06:31:58.570019   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.570035   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:31:58.570040   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:31:58.570098   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:31:58.594200   54335 cri.go:89] found id: ""
	I1205 06:31:58.594214   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.594220   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:31:58.594225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:31:58.594284   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:31:58.618421   54335 cri.go:89] found id: ""
	I1205 06:31:58.618434   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.618440   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:31:58.618445   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:31:58.618499   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:31:58.642656   54335 cri.go:89] found id: ""
	I1205 06:31:58.642669   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.642676   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:31:58.642682   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:31:58.642742   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:31:58.667838   54335 cri.go:89] found id: ""
	I1205 06:31:58.667850   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.667858   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:31:58.667863   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:31:58.667933   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:31:58.695900   54335 cri.go:89] found id: ""
	I1205 06:31:58.695914   54335 logs.go:282] 0 containers: []
	W1205 06:31:58.695921   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:31:58.695929   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:31:58.695939   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
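Of the gathered sources, the kubelet journal is the one most likely to say why the apiserver static pod never came up; the crictl scans and describe-nodes calls above can only confirm that it did not. A hypothetical manual triage of the same 400 lines (not part of the harness):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20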
	I1205 06:31:58.751191   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:31:58.751209   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:31:58.761861   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:31:58.761882   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:31:58.829503   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:31:58.822607   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.823005   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.824699   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.825076   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:31:58.826213   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:31:58.829513   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:31:58.829524   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:31:58.892286   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:31:58.892304   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:01.420326   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:01.430350   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:01.430415   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:01.455307   54335 cri.go:89] found id: ""
	I1205 06:32:01.455320   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.455328   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:01.455333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:01.455388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:01.479758   54335 cri.go:89] found id: ""
	I1205 06:32:01.479771   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.479778   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:01.479784   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:01.479840   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:01.502828   54335 cri.go:89] found id: ""
	I1205 06:32:01.502841   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.502848   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:01.502853   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:01.502908   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:01.528675   54335 cri.go:89] found id: ""
	I1205 06:32:01.528688   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.528698   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:01.528704   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:01.528762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:01.553405   54335 cri.go:89] found id: ""
	I1205 06:32:01.553419   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.553426   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:01.553431   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:01.553510   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:01.578373   54335 cri.go:89] found id: ""
	I1205 06:32:01.578387   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.578394   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:01.578400   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:01.578464   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:01.603666   54335 cri.go:89] found id: ""
	I1205 06:32:01.603689   54335 logs.go:282] 0 containers: []
	W1205 06:32:01.603697   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:01.603704   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:01.603714   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:01.661152   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:01.661181   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:01.672814   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:01.672831   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:01.736722   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:01.729093   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.729657   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731235   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.731803   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:01.733404   13410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:01.736731   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:01.736742   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:01.799762   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:01.799780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:04.328972   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:04.339381   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:04.339441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:04.365391   54335 cri.go:89] found id: ""
	I1205 06:32:04.365405   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.365412   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:04.365418   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:04.365487   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:04.395556   54335 cri.go:89] found id: ""
	I1205 06:32:04.395570   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.395577   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:04.395582   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:04.395640   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:04.425328   54335 cri.go:89] found id: ""
	I1205 06:32:04.425341   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.425348   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:04.425354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:04.425420   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:04.450514   54335 cri.go:89] found id: ""
	I1205 06:32:04.450528   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.450536   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:04.450541   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:04.450604   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:04.479372   54335 cri.go:89] found id: ""
	I1205 06:32:04.479386   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.479393   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:04.479398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:04.479459   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:04.504452   54335 cri.go:89] found id: ""
	I1205 06:32:04.504466   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.504473   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:04.504479   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:04.504539   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:04.529609   54335 cri.go:89] found id: ""
	I1205 06:32:04.529622   54335 logs.go:282] 0 containers: []
	W1205 06:32:04.529629   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:04.529637   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:04.529649   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:04.584301   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:04.584319   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:04.595557   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:04.595572   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:04.660266   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:04.651668   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.652518   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654083   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.654557   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:04.656089   13515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:04.660277   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:04.660288   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:04.723098   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:04.723115   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:07.257738   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:07.268081   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:07.268144   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:07.292559   54335 cri.go:89] found id: ""
	I1205 06:32:07.292573   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.292580   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:07.292585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:07.292645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:07.316782   54335 cri.go:89] found id: ""
	I1205 06:32:07.316796   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.316803   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:07.316809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:07.316869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:07.346176   54335 cri.go:89] found id: ""
	I1205 06:32:07.346189   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.346196   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:07.346201   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:07.346263   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:07.378787   54335 cri.go:89] found id: ""
	I1205 06:32:07.378800   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.378807   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:07.378812   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:07.378869   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:07.406652   54335 cri.go:89] found id: ""
	I1205 06:32:07.406666   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.406673   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:07.406678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:07.406746   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:07.438624   54335 cri.go:89] found id: ""
	I1205 06:32:07.438642   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.438649   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:07.438655   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:07.438726   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:07.464230   54335 cri.go:89] found id: ""
	I1205 06:32:07.464243   54335 logs.go:282] 0 containers: []
	W1205 06:32:07.464250   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:07.464257   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:07.464266   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:07.520945   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:07.520962   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:07.531896   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:07.531911   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:07.598302   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:07.588821   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.589395   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591106   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.591684   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:07.594475   13618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:07.598317   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:07.598327   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:07.661122   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:07.661139   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.190348   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:10.201225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:10.201307   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:10.230433   54335 cri.go:89] found id: ""
	I1205 06:32:10.230446   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.230453   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:10.230458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:10.230512   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:10.254051   54335 cri.go:89] found id: ""
	I1205 06:32:10.254070   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.254077   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:10.254082   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:10.254140   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:10.278518   54335 cri.go:89] found id: ""
	I1205 06:32:10.278531   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.278538   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:10.278543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:10.278599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:10.302979   54335 cri.go:89] found id: ""
	I1205 06:32:10.302992   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.302999   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:10.303004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:10.303059   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:10.331316   54335 cri.go:89] found id: ""
	I1205 06:32:10.331330   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.331337   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:10.331341   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:10.331400   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:10.362875   54335 cri.go:89] found id: ""
	I1205 06:32:10.362889   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.362896   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:10.362902   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:10.362959   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:10.393788   54335 cri.go:89] found id: ""
	I1205 06:32:10.393802   54335 logs.go:282] 0 containers: []
	W1205 06:32:10.393810   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:10.393818   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:10.393829   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:10.459886   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:10.452427   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.452934   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454546   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.454986   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:10.456499   13716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:10.459895   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:10.459905   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:10.521460   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:10.521481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:10.549040   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:10.549056   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:10.605396   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:10.605414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.117854   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:13.128117   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:13.128179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:13.153085   54335 cri.go:89] found id: ""
	I1205 06:32:13.153098   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.153105   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:13.153110   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:13.153199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:13.178442   54335 cri.go:89] found id: ""
	I1205 06:32:13.178455   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.178462   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:13.178467   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:13.178524   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:13.203207   54335 cri.go:89] found id: ""
	I1205 06:32:13.203220   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.203229   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:13.203234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:13.203292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:13.228073   54335 cri.go:89] found id: ""
	I1205 06:32:13.228086   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.228093   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:13.228098   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:13.228159   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:13.253259   54335 cri.go:89] found id: ""
	I1205 06:32:13.253272   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.253288   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:13.253293   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:13.253350   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:13.278480   54335 cri.go:89] found id: ""
	I1205 06:32:13.278493   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.278500   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:13.278506   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:13.278562   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:13.301934   54335 cri.go:89] found id: ""
	I1205 06:32:13.301948   54335 logs.go:282] 0 containers: []
	W1205 06:32:13.301955   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:13.301962   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:13.301972   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:13.356855   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:13.356876   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:13.368331   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:13.368352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:13.438131   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:13.429738   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.430489   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432231   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.432823   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:13.434562   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:13.438141   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:13.438151   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:13.501680   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:13.501699   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
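The cycle above is minikube's control-plane probe: pgrep -xnf (exact, newest, full-command-line match) looks for a running kube-apiserver process, then each expected component is queried with crictl ps -a --quiet --name=<component>, and an empty ID list produces the W-level "No container was found matching" warning. Below is a minimal Go sketch of that per-component check; the component list is copied from the log, but the program structure and names are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Matches the "No container was found matching" lines.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}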
	I1205 06:32:16.032304   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:16.042939   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:16.043006   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:16.069762   54335 cri.go:89] found id: ""
	I1205 06:32:16.069775   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.069782   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:16.069788   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:16.069844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:16.094242   54335 cri.go:89] found id: ""
	I1205 06:32:16.094255   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.094264   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:16.094270   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:16.094336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:16.120352   54335 cri.go:89] found id: ""
	I1205 06:32:16.120366   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.120373   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:16.120378   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:16.120435   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:16.149183   54335 cri.go:89] found id: ""
	I1205 06:32:16.149196   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.149203   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:16.149208   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:16.149270   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:16.179309   54335 cri.go:89] found id: ""
	I1205 06:32:16.179322   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.179328   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:16.179333   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:16.179388   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:16.204104   54335 cri.go:89] found id: ""
	I1205 06:32:16.204118   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.204125   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:16.204130   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:16.204190   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:16.230914   54335 cri.go:89] found id: ""
	I1205 06:32:16.230927   54335 logs.go:282] 0 containers: []
	W1205 06:32:16.230934   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:16.230941   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:16.230950   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:16.286405   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:16.286423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:16.297122   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:16.297136   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:16.367421   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:16.357623   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.358522   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360113   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.360727   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:16.362335   13929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:16.367430   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:16.367442   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:16.452050   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:16.452076   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
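When no components are found, each cycle falls back to gathering five log sources: the kubelet and containerd units via journalctl -n 400, kernel messages via dmesg (assuming util-linux semantics: -P no pager, -H human-readable, -L=never no color, filtered to warn and worse), kubectl describe nodes against the node's kubeconfig, and a crictl (or docker) container listing. A compact sketch that collects the non-kubectl sources with the exact command strings from the log; the helper layout is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// Command strings copied verbatim from the log; the gathering order here
// is fixed, unlike in the log (see the note on map ordering further below).
var sources = [][2]string{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range sources {
		fmt.Printf("=== %s ===\n", s[0])
		out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("(failed: %v)\n", err)
		}
		fmt.Print(string(out))
	}
}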
	I1205 06:32:18.982231   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:18.992354   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:18.992412   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:19.017989   54335 cri.go:89] found id: ""
	I1205 06:32:19.018004   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.018011   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:19.018016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:19.018077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:19.042217   54335 cri.go:89] found id: ""
	I1205 06:32:19.042230   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.042237   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:19.042242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:19.042301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:19.066699   54335 cri.go:89] found id: ""
	I1205 06:32:19.066713   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.066720   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:19.066725   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:19.066785   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:19.095590   54335 cri.go:89] found id: ""
	I1205 06:32:19.095603   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.095610   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:19.095616   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:19.095672   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:19.119155   54335 cri.go:89] found id: ""
	I1205 06:32:19.119169   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.119176   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:19.119181   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:19.119237   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:19.142787   54335 cri.go:89] found id: ""
	I1205 06:32:19.142801   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.142807   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:19.142813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:19.142873   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:19.168013   54335 cri.go:89] found id: ""
	I1205 06:32:19.168025   54335 logs.go:282] 0 containers: []
	W1205 06:32:19.168032   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:19.168039   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:19.168051   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:19.178464   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:19.178481   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:19.240233   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:19.233298   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.233706   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235213   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.235526   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:19.236960   14029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:19.240244   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:19.240253   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:19.300198   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:19.300217   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:19.329682   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:19.329697   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:21.888551   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:21.898274   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:21.898337   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:21.922474   54335 cri.go:89] found id: ""
	I1205 06:32:21.922486   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.922493   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:21.922498   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:21.922558   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:21.950761   54335 cri.go:89] found id: ""
	I1205 06:32:21.950775   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.950781   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:21.950786   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:21.950844   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:21.973829   54335 cri.go:89] found id: ""
	I1205 06:32:21.973843   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.973849   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:21.973854   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:21.973912   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:21.997620   54335 cri.go:89] found id: ""
	I1205 06:32:21.997634   54335 logs.go:282] 0 containers: []
	W1205 06:32:21.997641   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:21.997647   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:21.997702   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:22.033207   54335 cri.go:89] found id: ""
	I1205 06:32:22.033221   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.033228   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:22.033234   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:22.033296   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:22.062888   54335 cri.go:89] found id: ""
	I1205 06:32:22.062902   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.062909   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:22.062915   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:22.062973   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:22.091975   54335 cri.go:89] found id: ""
	I1205 06:32:22.091989   54335 logs.go:282] 0 containers: []
	W1205 06:32:22.091996   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:22.092004   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:22.092017   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:22.103145   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:22.103160   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:22.164851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:22.156849   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.157640   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159268   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.159573   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:22.161063   14129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:22.164860   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:22.164870   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:22.226105   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:22.226124   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:22.253915   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:22.253929   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
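The timestamps (06:32:13, :16, :19, :22, ...) show the probe retrying on roughly a three-second cadence until some deadline. A stand-alone sketch of such a poll-until-up loop follows, using a plain TCP dial against the test profile's apiserver port 8441 as the readiness predicate; both the predicate and the overall budget are assumptions, not minikube's implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp is an assumed readiness predicate: is anything listening
// on the profile's apiserver port?
func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for apiserver")
}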
	I1205 06:32:24.811993   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:24.821806   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:24.821865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:24.845836   54335 cri.go:89] found id: ""
	I1205 06:32:24.845850   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.845857   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:24.845864   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:24.845919   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:24.870475   54335 cri.go:89] found id: ""
	I1205 06:32:24.870489   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.870496   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:24.870505   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:24.870560   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:24.895049   54335 cri.go:89] found id: ""
	I1205 06:32:24.895061   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.895068   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:24.895074   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:24.895130   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:24.924307   54335 cri.go:89] found id: ""
	I1205 06:32:24.924320   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.924327   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:24.924332   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:24.924390   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:24.949595   54335 cri.go:89] found id: ""
	I1205 06:32:24.949608   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.949616   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:24.949621   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:24.949680   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:24.974582   54335 cri.go:89] found id: ""
	I1205 06:32:24.974595   54335 logs.go:282] 0 containers: []
	W1205 06:32:24.974602   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:24.974607   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:24.974664   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:25.003723   54335 cri.go:89] found id: ""
	I1205 06:32:25.003739   54335 logs.go:282] 0 containers: []
	W1205 06:32:25.003747   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:25.003755   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:25.003766   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:25.065829   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:25.065846   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:25.077220   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:25.077236   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:25.140111   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:25.132731   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.133376   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.134862   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.135186   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:25.136712   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:25.140121   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:25.140135   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:25.206118   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:25.206137   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
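Every describe-nodes attempt fails identically: exit status 1 with "dial tcp [::1]:8441: connect: connection refused" in stderr, meaning kubectl resolved localhost and found nothing listening on the configured apiserver port, as opposed to an auth, RBAC, or TLS problem. A sketch that runs the same command and classifies that failure mode; the classification logic is mine, not minikube's.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The exact command the log runs for "describe nodes".
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	switch {
	case err == nil:
		fmt.Println("describe nodes succeeded")
	case strings.Contains(stderr.String(), "connection refused"):
		fmt.Println("apiserver down: nothing listening on the configured port")
	default:
		fmt.Printf("failed for another reason: %v\n%s", err, stderr.String())
	}
}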
	I1205 06:32:27.733938   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:27.744224   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:27.744282   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:27.769011   54335 cri.go:89] found id: ""
	I1205 06:32:27.769024   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.769031   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:27.769036   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:27.769094   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:27.793434   54335 cri.go:89] found id: ""
	I1205 06:32:27.793448   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.793455   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:27.793460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:27.793556   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:27.821088   54335 cri.go:89] found id: ""
	I1205 06:32:27.821101   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.821108   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:27.821112   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:27.821209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:27.847229   54335 cri.go:89] found id: ""
	I1205 06:32:27.847242   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.847249   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:27.847254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:27.847310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:27.870944   54335 cri.go:89] found id: ""
	I1205 06:32:27.870958   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.870965   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:27.870970   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:27.871031   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:27.895361   54335 cri.go:89] found id: ""
	I1205 06:32:27.895375   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.895382   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:27.895388   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:27.895445   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:27.920868   54335 cri.go:89] found id: ""
	I1205 06:32:27.920881   54335 logs.go:282] 0 containers: []
	W1205 06:32:27.920888   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:27.920897   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:27.920908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:27.984326   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:27.984346   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:28.018053   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:28.018070   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:28.075646   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:28.075663   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:28.087097   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:28.087112   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:28.151403   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:28.143072   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.143826   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.145655   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.146333   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:28.147993   14350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
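Note that the order of the "Gathering logs for ..." lines rotates between cycles: kubelet first in some, dmesg first in others, and in the cycle just above containerd and container status come first. That pattern is consistent with ranging over a Go map, whose iteration order is deliberately randomized per run; this is an inference from the log, not a confirmed detail of minikube's code. A toy demonstration:

package main

import "fmt"

func main() {
	sources := map[string]bool{
		"kubelet": true, "dmesg": true, "describe nodes": true,
		"containerd": true, "container status": true,
	}
	for i := 0; i < 3; i++ {
		for name := range sources { // order varies run to run
			fmt.Printf("Gathering logs for %s ... ", name)
		}
		fmt.Println()
	}
}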
	I1205 06:32:30.651598   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:30.661458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:30.661527   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:30.689413   54335 cri.go:89] found id: ""
	I1205 06:32:30.689426   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.689443   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:30.689450   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:30.689523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:30.712971   54335 cri.go:89] found id: ""
	I1205 06:32:30.712987   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.712994   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:30.712999   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:30.713057   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:30.737851   54335 cri.go:89] found id: ""
	I1205 06:32:30.737871   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.737879   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:30.737884   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:30.737945   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:30.761745   54335 cri.go:89] found id: ""
	I1205 06:32:30.761759   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.761766   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:30.761771   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:30.761836   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:30.784898   54335 cri.go:89] found id: ""
	I1205 06:32:30.784912   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.784919   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:30.784924   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:30.784980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:30.810894   54335 cri.go:89] found id: ""
	I1205 06:32:30.810908   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.810915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:30.810920   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:30.810976   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:30.839604   54335 cri.go:89] found id: ""
	I1205 06:32:30.839617   54335 logs.go:282] 0 containers: []
	W1205 06:32:30.839623   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:30.839636   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:30.839647   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:30.865641   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:30.865658   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:30.921606   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:30.921625   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:30.932281   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:30.932297   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:30.995168   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:30.987715   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.988222   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.989765   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.990119   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:30.991752   14452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:30.995177   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:30.995187   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
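One formatting detail worth decoding: the "{State:all Name:kube-apiserver Namespaces:[]}" text in the cri.go lines is a Go struct rendered with the %+v verb, with an empty Namespaces slice printing as []. A minimal reproduction with an illustrative struct (the real field set in minikube may differ):

package main

import "fmt"

// listFilter is an illustrative stand-in for whatever struct cri.go logs.
type listFilter struct {
	State      string
	Name       string
	Namespaces []string
}

func main() {
	f := listFilter{State: "all", Name: "kube-apiserver"}
	fmt.Printf("listing CRI containers: %+v\n", f)
	// Prints: listing CRI containers: {State:all Name:kube-apiserver Namespaces:[]}
}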
	I1205 06:32:33.558401   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:33.568813   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:33.568893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:33.596483   54335 cri.go:89] found id: ""
	I1205 06:32:33.596496   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.596503   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:33.596508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:33.596566   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:33.624025   54335 cri.go:89] found id: ""
	I1205 06:32:33.624039   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.624046   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:33.624051   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:33.624108   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:33.655953   54335 cri.go:89] found id: ""
	I1205 06:32:33.655966   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.655974   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:33.655979   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:33.656039   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:33.684431   54335 cri.go:89] found id: ""
	I1205 06:32:33.684445   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.684452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:33.684458   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:33.684517   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:33.710631   54335 cri.go:89] found id: ""
	I1205 06:32:33.710644   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.710651   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:33.710656   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:33.710714   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:33.735367   54335 cri.go:89] found id: ""
	I1205 06:32:33.735380   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.735387   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:33.735393   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:33.735450   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:33.759636   54335 cri.go:89] found id: ""
	I1205 06:32:33.759650   54335 logs.go:282] 0 containers: []
	W1205 06:32:33.759657   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:33.759664   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:33.759675   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:33.814547   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:33.814565   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:33.825805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:33.825820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:33.891604   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:33.884022   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.884634   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886235   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.886812   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:33.888278   14546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:33.891614   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:33.891624   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:33.953767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:33.953787   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:36.482228   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:36.492694   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:36.492753   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:36.518206   54335 cri.go:89] found id: ""
	I1205 06:32:36.518222   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.518229   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:36.518233   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:36.518290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:36.543531   54335 cri.go:89] found id: ""
	I1205 06:32:36.543544   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.543551   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:36.543556   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:36.543615   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:36.567286   54335 cri.go:89] found id: ""
	I1205 06:32:36.567299   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.567306   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:36.567311   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:36.567367   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:36.592165   54335 cri.go:89] found id: ""
	I1205 06:32:36.592178   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.592185   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:36.592190   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:36.592246   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:36.621238   54335 cri.go:89] found id: ""
	I1205 06:32:36.621251   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.621258   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:36.621264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:36.621329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:36.646816   54335 cri.go:89] found id: ""
	I1205 06:32:36.646838   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.646845   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:36.646850   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:36.646917   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:36.672562   54335 cri.go:89] found id: ""
	I1205 06:32:36.672575   54335 logs.go:282] 0 containers: []
	W1205 06:32:36.672582   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:36.672599   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:36.672609   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:36.727909   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:36.727926   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:36.738625   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:36.738641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:36.803851   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:36.795935   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.796356   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.797950   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.798308   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:36.800017   14652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
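Every "describe nodes" attempt in this run fails the same way: kubectl dials localhost:8441, nothing is listening because no kube-apiserver container was ever created, and the TCP connect is refused. A minimal Go probe, assuming the same host and port as the log, reproduces exactly that symptom:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl uses above; a refused dial means nothing is
        // bound to the port, not that a server rejected the request.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }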
	I1205 06:32:36.803861   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:36.803872   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:36.865831   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:36.865849   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
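Taken together, the lines above form one complete health-check cycle: probe for an apiserver process, ask the CRI runtime for each expected control-plane container, then gather kubelet/dmesg/node/containerd/container-status diagnostics before retrying about three seconds later. A hypothetical sketch of that cycle (names and timeout are assumptions, not taken from minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // containerIDs mirrors the log's "sudo crictl ps -a --quiet --name=<c>".
    func containerIDs(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            missing := 0
            for _, c := range components {
                if len(containerIDs(c)) == 0 {
                    fmt.Printf("no container found matching %q\n", c)
                    missing++
                }
            }
            if missing == 0 {
                fmt.Println("control plane containers present")
                return
            }
            time.Sleep(3 * time.Second) // matches the cadence visible above
        }
        fmt.Println("timed out waiting for the control plane")
    }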
	I1205 06:32:39.393852   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:39.404022   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:39.404090   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:39.433108   54335 cri.go:89] found id: ""
	I1205 06:32:39.433122   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.433129   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:39.433134   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:39.433218   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:39.458840   54335 cri.go:89] found id: ""
	I1205 06:32:39.458853   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.458862   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:39.458867   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:39.458923   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:39.483121   54335 cri.go:89] found id: ""
	I1205 06:32:39.483135   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.483142   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:39.483147   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:39.483203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:39.508080   54335 cri.go:89] found id: ""
	I1205 06:32:39.508092   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.508100   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:39.508107   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:39.508166   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:39.532483   54335 cri.go:89] found id: ""
	I1205 06:32:39.532496   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.532503   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:39.532508   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:39.532563   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:39.556203   54335 cri.go:89] found id: ""
	I1205 06:32:39.556217   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.556224   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:39.556229   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:39.556286   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:39.579787   54335 cri.go:89] found id: ""
	I1205 06:32:39.579802   54335 logs.go:282] 0 containers: []
	W1205 06:32:39.579809   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:39.579818   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:39.579828   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:39.644828   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:39.644847   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:39.657327   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:39.657341   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:39.724034   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:39.716361   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.716905   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718372   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.718880   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:39.720306   14758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:39.724044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:39.724054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:39.786205   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:39.786224   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
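Between probes, every cycle gathers the same diagnostics seen here: the kubelet journal, filtered kernel messages, a kubectl describe of the nodes (the step that keeps failing), the containerd journal, and a container listing that falls back from crictl to docker when the former is absent. Condensed into a standalone sketch (the structure is assumed; this is not the actual logs.go code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            // The || chain tries crictl first and falls back to docker,
            // exactly as in the log lines above.
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("=== %s (err=%v) ===\n%s\n", s.name, err, out)
        }
    }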
	I1205 06:32:42.317043   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:42.327925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:42.327988   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:42.353925   54335 cri.go:89] found id: ""
	I1205 06:32:42.353939   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.353946   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:42.353952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:42.354013   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:42.385300   54335 cri.go:89] found id: ""
	I1205 06:32:42.385314   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.385321   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:42.385326   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:42.385385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:42.411306   54335 cri.go:89] found id: ""
	I1205 06:32:42.411319   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.411326   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:42.411331   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:42.411389   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:42.436499   54335 cri.go:89] found id: ""
	I1205 06:32:42.436513   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.436520   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:42.436526   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:42.436590   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:42.461983   54335 cri.go:89] found id: ""
	I1205 06:32:42.462000   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.462008   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:42.462013   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:42.462072   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:42.490948   54335 cri.go:89] found id: ""
	I1205 06:32:42.490962   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.490971   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:42.490976   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:42.491036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:42.515766   54335 cri.go:89] found id: ""
	I1205 06:32:42.515785   54335 logs.go:282] 0 containers: []
	W1205 06:32:42.515793   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:42.515800   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:42.515810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:42.571249   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:42.571267   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:42.582146   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:42.582161   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:42.671227   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:42.659945   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.660555   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.665688   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.666233   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:42.667791   14859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:42.671236   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:42.671247   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:42.733761   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:42.733780   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
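Each cycle opens with the same process-level probe, shown again on the next line: pgrep with -x (exact match), -n (newest matching process), and -f (match against the full command line), looking for an apiserver launched by minikube. Run in isolation (a hypothetical wrapper around the same command as the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep exits 1 when nothing matches, which Go surfaces as an error;
        // here that simply confirms no kube-apiserver process is running.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Printf("newest matching PID: %s", out)
    }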
	I1205 06:32:45.261718   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:45.276631   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:45.276700   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:45.305280   54335 cri.go:89] found id: ""
	I1205 06:32:45.305296   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.305304   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:45.305309   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:45.305375   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:45.332314   54335 cri.go:89] found id: ""
	I1205 06:32:45.332407   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.332482   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:45.332488   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:45.332551   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:45.368080   54335 cri.go:89] found id: ""
	I1205 06:32:45.368141   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.368165   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:45.368171   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:45.368336   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:45.400257   54335 cri.go:89] found id: ""
	I1205 06:32:45.400284   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.400292   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:45.400298   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:45.400368   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:45.425301   54335 cri.go:89] found id: ""
	I1205 06:32:45.425314   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.425321   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:45.425327   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:45.425385   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:45.450756   54335 cri.go:89] found id: ""
	I1205 06:32:45.450769   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.450777   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:45.450782   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:45.450845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:45.481391   54335 cri.go:89] found id: ""
	I1205 06:32:45.481405   54335 logs.go:282] 0 containers: []
	W1205 06:32:45.481413   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:45.481421   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:45.481441   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:45.539446   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:45.539465   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:45.550849   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:45.550865   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:45.628789   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:45.621303   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.621758   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623274   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.623572   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:45.625028   14966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:45.628800   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:45.628810   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:45.699540   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:45.699558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.227049   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:48.237481   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:48.237550   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:48.267696   54335 cri.go:89] found id: ""
	I1205 06:32:48.267709   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.267716   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:48.267721   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:48.267789   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:48.294097   54335 cri.go:89] found id: ""
	I1205 06:32:48.294112   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.294118   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:48.294124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:48.294186   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:48.324117   54335 cri.go:89] found id: ""
	I1205 06:32:48.324131   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.324139   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:48.324144   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:48.324203   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:48.349743   54335 cri.go:89] found id: ""
	I1205 06:32:48.349758   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.349765   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:48.349781   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:48.349849   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:48.379197   54335 cri.go:89] found id: ""
	I1205 06:32:48.379211   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.379219   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:48.379225   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:48.379283   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:48.404472   54335 cri.go:89] found id: ""
	I1205 06:32:48.404486   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.404493   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:48.404499   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:48.404555   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:48.430058   54335 cri.go:89] found id: ""
	I1205 06:32:48.430072   54335 logs.go:282] 0 containers: []
	W1205 06:32:48.430079   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:48.430086   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:48.430099   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:48.459503   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:48.459519   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:48.518141   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:48.518158   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:48.529014   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:48.529031   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:48.601337   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:48.590875   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.591327   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.593747   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.595586   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:48.596337   15085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:48.601347   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:48.601357   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.177615   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:51.187543   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:51.187599   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:51.218589   54335 cri.go:89] found id: ""
	I1205 06:32:51.218603   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.218610   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:51.218615   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:51.218673   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:51.243490   54335 cri.go:89] found id: ""
	I1205 06:32:51.243509   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.243516   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:51.243521   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:51.243577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:51.268372   54335 cri.go:89] found id: ""
	I1205 06:32:51.268385   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.268393   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:51.268398   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:51.268458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:51.292432   54335 cri.go:89] found id: ""
	I1205 06:32:51.292445   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.292452   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:51.292457   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:51.292513   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:51.316338   54335 cri.go:89] found id: ""
	I1205 06:32:51.316351   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.316358   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:51.316364   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:51.316419   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:51.341611   54335 cri.go:89] found id: ""
	I1205 06:32:51.341625   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.341645   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:51.341650   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:51.341708   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:51.365650   54335 cri.go:89] found id: ""
	I1205 06:32:51.365664   54335 logs.go:282] 0 containers: []
	W1205 06:32:51.365671   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:51.365679   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:51.365690   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:51.377639   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:51.377655   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:51.443518   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:51.435665   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.436407   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438103   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.438498   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:51.439930   15179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:51.443527   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:51.443540   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:51.505744   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:51.505763   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:51.532869   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:51.532884   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.096225   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:54.106698   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:54.106760   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:54.134689   54335 cri.go:89] found id: ""
	I1205 06:32:54.134702   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.134709   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:54.134714   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:54.134769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:54.158113   54335 cri.go:89] found id: ""
	I1205 06:32:54.158126   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.158133   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:54.158138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:54.158199   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:54.182422   54335 cri.go:89] found id: ""
	I1205 06:32:54.182436   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.182444   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:54.182448   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:54.182508   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:54.206399   54335 cri.go:89] found id: ""
	I1205 06:32:54.206412   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.206418   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:54.206423   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:54.206481   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:54.229926   54335 cri.go:89] found id: ""
	I1205 06:32:54.229940   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.229947   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:54.229952   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:54.230011   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:54.254356   54335 cri.go:89] found id: ""
	I1205 06:32:54.254370   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.254377   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:54.254382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:54.254441   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:54.278495   54335 cri.go:89] found id: ""
	I1205 06:32:54.278508   54335 logs.go:282] 0 containers: []
	W1205 06:32:54.278516   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:54.278523   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:54.278533   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:54.305603   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:54.305619   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:54.360184   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:54.360202   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:54.371510   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:54.371525   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:54.438927   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:54.429388   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.430239   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.432334   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.433110   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:54.435152   15292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:54.438936   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:54.438947   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.002913   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:57.020172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:57.020235   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:57.044543   54335 cri.go:89] found id: ""
	I1205 06:32:57.044556   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.044564   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:57.044570   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:57.044629   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:57.070053   54335 cri.go:89] found id: ""
	I1205 06:32:57.070067   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.070074   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:57.070079   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:57.070134   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:32:57.094644   54335 cri.go:89] found id: ""
	I1205 06:32:57.094659   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.094666   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:32:57.094670   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:32:57.094769   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:32:57.118698   54335 cri.go:89] found id: ""
	I1205 06:32:57.118722   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.118729   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:32:57.118734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:32:57.118799   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:32:57.142854   54335 cri.go:89] found id: ""
	I1205 06:32:57.142868   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.142875   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:32:57.142881   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:32:57.142946   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:32:57.171220   54335 cri.go:89] found id: ""
	I1205 06:32:57.171234   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.171241   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:32:57.171246   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:32:57.171311   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:32:57.195529   54335 cri.go:89] found id: ""
	I1205 06:32:57.195544   54335 logs.go:282] 0 containers: []
	W1205 06:32:57.195551   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:32:57.195558   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:32:57.195578   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:32:57.251284   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:32:57.251305   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:32:57.262555   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:32:57.262570   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:32:57.333629   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:32:57.326387   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.326886   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328440   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.328930   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:32:57.330375   15383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:32:57.333638   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:32:57.333651   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:32:57.394773   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:32:57.394791   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:32:59.923047   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:32:59.933128   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:32:59.933207   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:32:59.960876   54335 cri.go:89] found id: ""
	I1205 06:32:59.960890   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.960896   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:32:59.960901   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:32:59.960961   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:32:59.985649   54335 cri.go:89] found id: ""
	I1205 06:32:59.985664   54335 logs.go:282] 0 containers: []
	W1205 06:32:59.985671   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:32:59.985676   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:32:59.985737   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:00.069985   54335 cri.go:89] found id: ""
	I1205 06:33:00.070002   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.070019   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:00.070026   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:00.070103   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:00.156917   54335 cri.go:89] found id: ""
	I1205 06:33:00.156936   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.156945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:00.156958   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:00.157043   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:00.284647   54335 cri.go:89] found id: ""
	I1205 06:33:00.284663   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.284672   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:00.284678   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:00.284758   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:00.335248   54335 cri.go:89] found id: ""
	I1205 06:33:00.335263   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.335271   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:00.335280   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:00.335365   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:00.377235   54335 cri.go:89] found id: ""
	I1205 06:33:00.377251   54335 logs.go:282] 0 containers: []
	W1205 06:33:00.377259   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:00.377267   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:00.377291   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:00.390543   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:00.390561   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:00.464312   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:00.454965   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.455845   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.457669   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.458537   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:00.460402   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:00.464323   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:00.464334   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:00.528767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:00.528786   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:00.562265   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:00.562282   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
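	
	The loop above is minikube's control-plane readiness probe: it pgreps for a kube-apiserver process, then asks the CRI runtime for each expected control-plane container by name. To reproduce the same check by hand, something like the following should work (a minimal sketch, assuming you can reach the node, e.g. via minikube ssh for this profile, and that crictl is installed there, as it is in the log above):
	
		# Check for a running apiserver process, then query containerd via CRI
		# for each control-plane container the harness expects.
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  echo "== $name =="
		  sudo crictl ps -a --quiet --name="$name"   # empty output: no container was ever created
		done
	
	An empty ID list for every component, as seen in every cycle here, suggests kubelet never created any control-plane containers at all, so the failure sits below Kubernetes (kubelet or containerd) rather than in the pods themselves.
	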
	I1205 06:33:03.126784   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:03.137248   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:03.137309   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:03.163136   54335 cri.go:89] found id: ""
	I1205 06:33:03.163149   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.163156   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:03.163161   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:03.163221   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:03.189239   54335 cri.go:89] found id: ""
	I1205 06:33:03.189253   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.189261   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:03.189277   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:03.189340   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:03.215019   54335 cri.go:89] found id: ""
	I1205 06:33:03.215032   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.215039   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:03.215045   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:03.215104   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:03.240336   54335 cri.go:89] found id: ""
	I1205 06:33:03.240350   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.240357   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:03.240362   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:03.240421   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:03.264735   54335 cri.go:89] found id: ""
	I1205 06:33:03.264749   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.264762   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:03.264767   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:03.264831   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:03.289528   54335 cri.go:89] found id: ""
	I1205 06:33:03.289541   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.289548   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:03.289553   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:03.289658   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:03.315032   54335 cri.go:89] found id: ""
	I1205 06:33:03.315046   54335 logs.go:282] 0 containers: []
	W1205 06:33:03.315053   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:03.315060   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:03.315071   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:03.371569   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:03.371588   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:03.382809   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:03.382825   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:03.450556   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:03.442547   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.443142   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445000   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.445833   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:03.446990   15586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:03.450566   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:03.450577   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:03.516929   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:03.516948   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:06.046009   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:06.057281   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:06.057355   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:06.084601   54335 cri.go:89] found id: ""
	I1205 06:33:06.084615   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.084623   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:06.084629   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:06.084690   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:06.111286   54335 cri.go:89] found id: ""
	I1205 06:33:06.111300   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.111307   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:06.111313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:06.111374   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:06.136965   54335 cri.go:89] found id: ""
	I1205 06:33:06.136978   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.136985   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:06.136990   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:06.137048   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:06.162299   54335 cri.go:89] found id: ""
	I1205 06:33:06.162312   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.162319   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:06.162325   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:06.162387   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:06.189555   54335 cri.go:89] found id: ""
	I1205 06:33:06.189569   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.189576   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:06.189581   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:06.189645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:06.215170   54335 cri.go:89] found id: ""
	I1205 06:33:06.215184   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.215192   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:06.215198   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:06.215258   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:06.241073   54335 cri.go:89] found id: ""
	I1205 06:33:06.241087   54335 logs.go:282] 0 containers: []
	W1205 06:33:06.241094   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:06.241112   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:06.241123   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:06.296188   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:06.296205   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:06.306926   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:06.306941   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:06.371295   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:06.363444   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.364162   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.365700   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.366364   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:06.367956   15692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:06.371304   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:06.371316   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:06.432933   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:06.432951   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:08.969294   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:08.979402   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:08.979463   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:09.020683   54335 cri.go:89] found id: ""
	I1205 06:33:09.020697   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.020704   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:09.020710   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:09.020771   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:09.046109   54335 cri.go:89] found id: ""
	I1205 06:33:09.046123   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.046130   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:09.046136   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:09.046195   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:09.070968   54335 cri.go:89] found id: ""
	I1205 06:33:09.070981   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.070988   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:09.070995   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:09.071056   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:09.096098   54335 cri.go:89] found id: ""
	I1205 06:33:09.096111   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.096118   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:09.096123   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:09.096226   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:09.121468   54335 cri.go:89] found id: ""
	I1205 06:33:09.121482   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.121489   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:09.121495   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:09.121573   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:09.150975   54335 cri.go:89] found id: ""
	I1205 06:33:09.150989   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.150997   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:09.151004   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:09.151063   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:09.176504   54335 cri.go:89] found id: ""
	I1205 06:33:09.176517   54335 logs.go:282] 0 containers: []
	W1205 06:33:09.176527   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:09.176534   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:09.176545   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:09.203288   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:09.203302   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:09.259402   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:09.259423   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:09.270454   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:09.270470   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:09.334084   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:09.326438   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.326872   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328506   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.328861   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:09.330463   15807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:09.334095   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:09.334105   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:11.894816   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:11.904810   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:11.904871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:11.930015   54335 cri.go:89] found id: ""
	I1205 06:33:11.930029   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.930036   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:11.930042   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:11.930100   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:11.954795   54335 cri.go:89] found id: ""
	I1205 06:33:11.954808   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.954815   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:11.954821   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:11.954877   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:11.978195   54335 cri.go:89] found id: ""
	I1205 06:33:11.978208   54335 logs.go:282] 0 containers: []
	W1205 06:33:11.978231   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:11.978236   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:11.978292   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:12.003210   54335 cri.go:89] found id: ""
	I1205 06:33:12.003227   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.003235   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:12.003241   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:12.003326   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:12.033020   54335 cri.go:89] found id: ""
	I1205 06:33:12.033034   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.033041   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:12.033046   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:12.033111   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:12.058060   54335 cri.go:89] found id: ""
	I1205 06:33:12.058073   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.058081   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:12.058086   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:12.058143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:12.082699   54335 cri.go:89] found id: ""
	I1205 06:33:12.082713   54335 logs.go:282] 0 containers: []
	W1205 06:33:12.082719   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:12.082727   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:12.082737   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:12.151250   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:12.142947   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.143602   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145353   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.145952   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:12.147593   15897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:12.151259   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:12.151271   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:12.218438   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:12.218461   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:12.248241   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:12.248260   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:12.307820   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:12.307838   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
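	
	Every "describe nodes" attempt above fails the same way: the kubectl client dials the apiserver port this profile uses (8441) and gets connection refused on [::1]:8441, meaning nothing is listening there. A quick manual check from inside the node (a sketch, assuming ss and curl are available; /healthz would only answer once an apiserver is actually up):
	
		# Confirm nothing is bound on the apiserver port, then probe it directly.
		sudo ss -ltn 'sport = :8441' || true
		curl -ks https://localhost:8441/healthz || echo "apiserver not listening on 8441"
	
	Until a kube-apiserver container exists and binds 8441, each describe-nodes pass will keep emitting this identical connection-refused stderr.
	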
	I1205 06:33:14.820623   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:14.830697   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:14.830756   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:14.863478   54335 cri.go:89] found id: ""
	I1205 06:33:14.863492   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.863499   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:14.863504   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:14.863565   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:14.895084   54335 cri.go:89] found id: ""
	I1205 06:33:14.895098   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.895106   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:14.895111   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:14.895172   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:14.925468   54335 cri.go:89] found id: ""
	I1205 06:33:14.925482   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.925489   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:14.925494   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:14.925614   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:14.954925   54335 cri.go:89] found id: ""
	I1205 06:33:14.954938   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.954945   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:14.954950   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:14.955009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:14.980066   54335 cri.go:89] found id: ""
	I1205 06:33:14.980080   54335 logs.go:282] 0 containers: []
	W1205 06:33:14.980088   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:14.980093   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:14.980152   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:15.028743   54335 cri.go:89] found id: ""
	I1205 06:33:15.028763   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.028770   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:15.028777   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:15.028845   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:15.057623   54335 cri.go:89] found id: ""
	I1205 06:33:15.057636   54335 logs.go:282] 0 containers: []
	W1205 06:33:15.057643   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:15.057650   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:15.057661   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:15.114789   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:15.114808   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:15.126224   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:15.126240   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:15.193033   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:15.184929   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.185789   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187491   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.187821   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:15.189530   16005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:15.193044   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:15.193054   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:15.256748   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:15.256767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:17.786454   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:17.796729   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:17.796787   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:17.825815   54335 cri.go:89] found id: ""
	I1205 06:33:17.825828   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.825835   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:17.825840   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:17.825900   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:17.855661   54335 cri.go:89] found id: ""
	I1205 06:33:17.855675   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.855682   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:17.855687   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:17.855744   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:17.883175   54335 cri.go:89] found id: ""
	I1205 06:33:17.883188   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.883195   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:17.883200   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:17.883260   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:17.911578   54335 cri.go:89] found id: ""
	I1205 06:33:17.911592   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.911599   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:17.911604   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:17.911662   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:17.939731   54335 cri.go:89] found id: ""
	I1205 06:33:17.939750   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.939758   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:17.939763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:17.939818   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:17.968310   54335 cri.go:89] found id: ""
	I1205 06:33:17.968323   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.968330   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:17.968335   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:17.968392   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:17.992739   54335 cri.go:89] found id: ""
	I1205 06:33:17.992752   54335 logs.go:282] 0 containers: []
	W1205 06:33:17.992759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:17.992765   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:17.992776   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:18.006966   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:18.006985   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:18.077932   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:18.067988   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.068697   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.071196   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.072003   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:18.073192   16105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:18.077943   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:18.077954   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:18.141190   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:18.141206   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:18.172978   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:18.172995   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:20.730714   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:20.741267   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:20.741329   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:20.765738   54335 cri.go:89] found id: ""
	I1205 06:33:20.765751   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.765758   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:20.765763   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:20.765821   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:20.790360   54335 cri.go:89] found id: ""
	I1205 06:33:20.790373   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.790380   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:20.790385   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:20.790446   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:20.815276   54335 cri.go:89] found id: ""
	I1205 06:33:20.815290   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.815297   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:20.815302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:20.815361   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:20.840257   54335 cri.go:89] found id: ""
	I1205 06:33:20.840270   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.840277   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:20.840283   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:20.840345   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:20.869989   54335 cri.go:89] found id: ""
	I1205 06:33:20.870003   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.870010   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:20.870015   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:20.870077   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:20.908890   54335 cri.go:89] found id: ""
	I1205 06:33:20.908903   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.908915   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:20.908921   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:20.908978   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:20.935421   54335 cri.go:89] found id: ""
	I1205 06:33:20.935435   54335 logs.go:282] 0 containers: []
	W1205 06:33:20.935442   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:20.935450   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:20.935460   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:20.946582   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:20.946597   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:21.010138   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:20.999742   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.000436   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.002782   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.003697   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:21.005641   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:21.010149   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:21.010172   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:21.077392   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:21.077409   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:21.105240   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:21.105255   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.662909   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:23.672961   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:23.673022   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:23.697989   54335 cri.go:89] found id: ""
	I1205 06:33:23.698003   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.698010   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:23.698016   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:23.698078   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:23.723698   54335 cri.go:89] found id: ""
	I1205 06:33:23.723712   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.723718   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:23.723723   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:23.723781   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:23.747403   54335 cri.go:89] found id: ""
	I1205 06:33:23.747416   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.747423   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:23.747428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:23.747486   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:23.775201   54335 cri.go:89] found id: ""
	I1205 06:33:23.775214   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.775221   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:23.775227   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:23.775290   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:23.799494   54335 cri.go:89] found id: ""
	I1205 06:33:23.799507   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.799514   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:23.799519   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:23.799575   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:23.824229   54335 cri.go:89] found id: ""
	I1205 06:33:23.824242   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.824249   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:23.824254   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:23.824310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:23.851738   54335 cri.go:89] found id: ""
	I1205 06:33:23.851752   54335 logs.go:282] 0 containers: []
	W1205 06:33:23.851759   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:23.851767   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:23.851777   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:23.897695   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:23.897710   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:23.961464   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:23.961482   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:23.972542   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:23.972558   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:24.046391   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:24.038441   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.039274   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.040964   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.041464   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:24.043066   16325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:24.046402   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:24.046414   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.611978   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:26.621743   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:26.621802   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:26.645855   54335 cri.go:89] found id: ""
	I1205 06:33:26.645868   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.645875   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:26.645879   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:26.645934   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:26.675349   54335 cri.go:89] found id: ""
	I1205 06:33:26.675363   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.675369   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:26.675374   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:26.675430   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:26.698540   54335 cri.go:89] found id: ""
	I1205 06:33:26.698554   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.698561   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:26.698566   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:26.698630   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:26.721264   54335 cri.go:89] found id: ""
	I1205 06:33:26.721277   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.721283   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:26.721288   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:26.721343   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:26.744526   54335 cri.go:89] found id: ""
	I1205 06:33:26.744539   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.744546   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:26.744551   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:26.744607   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:26.767695   54335 cri.go:89] found id: ""
	I1205 06:33:26.767719   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.767727   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:26.767732   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:26.767792   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:26.791289   54335 cri.go:89] found id: ""
	I1205 06:33:26.791329   54335 logs.go:282] 0 containers: []
	W1205 06:33:26.791336   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:26.791344   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:26.791354   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:26.856152   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:26.845400   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.846423   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.848401   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.849234   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:26.850202   16409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:26.856162   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:26.856173   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:26.930967   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:26.930987   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:26.958183   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:26.958200   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:27.015910   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:27.015927   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.527097   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:29.537027   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:29.537087   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:29.561570   54335 cri.go:89] found id: ""
	I1205 06:33:29.561583   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.561591   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:29.561598   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:29.561655   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:29.586431   54335 cri.go:89] found id: ""
	I1205 06:33:29.586445   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.586452   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:29.586474   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:29.586543   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:29.615124   54335 cri.go:89] found id: ""
	I1205 06:33:29.615139   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.615145   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:29.615151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:29.615208   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:29.640801   54335 cri.go:89] found id: ""
	I1205 06:33:29.640814   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.640831   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:29.640837   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:29.640893   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:29.665711   54335 cri.go:89] found id: ""
	I1205 06:33:29.665725   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.665731   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:29.665737   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:29.665797   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:29.690393   54335 cri.go:89] found id: ""
	I1205 06:33:29.690416   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.690423   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:29.690428   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:29.690500   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:29.714522   54335 cri.go:89] found id: ""
	I1205 06:33:29.714535   54335 logs.go:282] 0 containers: []
	W1205 06:33:29.714542   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:29.714550   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:29.714562   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:29.770787   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:29.770804   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:29.781149   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:29.781179   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:29.848588   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:29.838965   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.839369   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.840958   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.841406   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:29.842881   16518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:29.848601   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:29.848612   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:29.927646   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:29.927665   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:32.455807   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:32.466055   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:32.466118   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:32.490796   54335 cri.go:89] found id: ""
	I1205 06:33:32.490809   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.490816   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:32.490822   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:32.490881   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:32.515490   54335 cri.go:89] found id: ""
	I1205 06:33:32.515503   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.515511   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:32.515516   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:32.515577   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:32.543147   54335 cri.go:89] found id: ""
	I1205 06:33:32.543161   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.543167   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:32.543172   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:32.543234   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:32.567288   54335 cri.go:89] found id: ""
	I1205 06:33:32.567301   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.567308   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:32.567313   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:32.567370   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:32.594765   54335 cri.go:89] found id: ""
	I1205 06:33:32.594778   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.594785   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:32.594790   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:32.594846   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:32.628174   54335 cri.go:89] found id: ""
	I1205 06:33:32.628187   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.628208   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:32.628223   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:32.628310   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:32.653204   54335 cri.go:89] found id: ""
	I1205 06:33:32.653218   54335 logs.go:282] 0 containers: []
	W1205 06:33:32.653225   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:32.653232   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:32.653242   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:32.713436   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:32.713452   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:32.723879   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:32.723894   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:32.788746   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:32.780259   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.780873   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.782713   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.783204   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:32.784855   16624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:32.788757   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:32.788767   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:32.850792   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:32.850809   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:35.388187   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:35.398195   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:35.398254   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:35.421975   54335 cri.go:89] found id: ""
	I1205 06:33:35.421989   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.421996   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:35.422002   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:35.422065   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:35.445920   54335 cri.go:89] found id: ""
	I1205 06:33:35.445934   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.445942   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:35.445947   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:35.446009   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:35.471144   54335 cri.go:89] found id: ""
	I1205 06:33:35.471157   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.471164   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:35.471169   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:35.471231   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:35.495788   54335 cri.go:89] found id: ""
	I1205 06:33:35.495802   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.495808   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:35.495814   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:35.495871   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:35.524598   54335 cri.go:89] found id: ""
	I1205 06:33:35.524621   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.524628   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:35.524633   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:35.524701   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:35.549143   54335 cri.go:89] found id: ""
	I1205 06:33:35.549227   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.549235   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:35.549242   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:35.549301   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:35.574312   54335 cri.go:89] found id: ""
	I1205 06:33:35.574325   54335 logs.go:282] 0 containers: []
	W1205 06:33:35.574332   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:35.574340   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:35.574352   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:35.628890   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:35.628908   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:35.639919   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:35.639934   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:35.703264   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:35.695689   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.696286   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.697814   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.698255   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.699741   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:35.695689   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.696286   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.697814   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.698255   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:35.699741   16730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:35.703273   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:35.703286   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:35.766049   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:35.766067   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:38.297790   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:38.307702   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:38.307762   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:38.336326   54335 cri.go:89] found id: ""
	I1205 06:33:38.336340   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.336348   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:38.336353   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:38.336410   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:38.361342   54335 cri.go:89] found id: ""
	I1205 06:33:38.361356   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.361363   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:38.361371   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:38.361429   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:38.385186   54335 cri.go:89] found id: ""
	I1205 06:33:38.385200   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.385208   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:38.385213   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:38.385281   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:38.413803   54335 cri.go:89] found id: ""
	I1205 06:33:38.413816   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.413824   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:38.413829   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:38.413889   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:38.437536   54335 cri.go:89] found id: ""
	I1205 06:33:38.437572   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.437579   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:38.437585   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:38.437645   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:38.462979   54335 cri.go:89] found id: ""
	I1205 06:33:38.462993   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.463000   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:38.463006   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:38.463069   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:38.488151   54335 cri.go:89] found id: ""
	I1205 06:33:38.488163   54335 logs.go:282] 0 containers: []
	W1205 06:33:38.488170   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:38.488186   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:38.488196   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:38.544680   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:38.544696   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:38.555626   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:38.555641   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:38.618692   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:38.610579   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.611054   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.612674   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.613205   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.614695   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:38.610579   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.611054   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.612674   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.613205   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:38.614695   16837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:38.618701   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:38.618712   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:38.682609   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:38.682629   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:41.211631   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:41.221454   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:41.221514   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:41.245435   54335 cri.go:89] found id: ""
	I1205 06:33:41.245448   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.245455   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:41.245460   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:41.245516   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:41.268900   54335 cri.go:89] found id: ""
	I1205 06:33:41.268913   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.268920   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:41.268925   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:41.268980   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:41.297438   54335 cri.go:89] found id: ""
	I1205 06:33:41.297452   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.297460   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:41.297471   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:41.297536   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:41.325936   54335 cri.go:89] found id: ""
	I1205 06:33:41.325949   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.325956   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:41.325962   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:41.326036   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:41.354117   54335 cri.go:89] found id: ""
	I1205 06:33:41.354131   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.354138   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:41.354152   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:41.354209   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:41.378638   54335 cri.go:89] found id: ""
	I1205 06:33:41.378651   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.378658   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:41.378664   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:41.378720   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:41.407136   54335 cri.go:89] found id: ""
	I1205 06:33:41.407150   54335 logs.go:282] 0 containers: []
	W1205 06:33:41.407157   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:41.407164   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:41.407176   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:41.466362   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:41.466385   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:41.477977   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:41.477993   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:41.544052   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:41.534487   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.535316   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537008   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537377   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.540464   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:41.534487   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.535316   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537008   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.537377   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:41.540464   16941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:41.544062   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:41.544073   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:41.606455   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:41.606472   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:44.134370   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:44.145440   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:44.145497   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:44.171961   54335 cri.go:89] found id: ""
	I1205 06:33:44.171975   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.171982   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:44.171987   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:44.172046   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:44.197113   54335 cri.go:89] found id: ""
	I1205 06:33:44.197127   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.197134   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:44.197138   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:44.197210   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:44.222364   54335 cri.go:89] found id: ""
	I1205 06:33:44.222378   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.222385   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:44.222390   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:44.222449   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:44.252062   54335 cri.go:89] found id: ""
	I1205 06:33:44.252075   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.252082   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:44.252087   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:44.252143   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:44.277356   54335 cri.go:89] found id: ""
	I1205 06:33:44.277370   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.277377   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:44.277382   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:44.277440   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:44.302126   54335 cri.go:89] found id: ""
	I1205 06:33:44.302139   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.302146   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:44.302151   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:44.302214   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:44.326368   54335 cri.go:89] found id: ""
	I1205 06:33:44.326382   54335 logs.go:282] 0 containers: []
	W1205 06:33:44.326389   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:44.326396   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:44.326406   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:44.382509   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:44.382526   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:44.393060   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:44.393075   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:44.454175   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:44.446190   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.447001   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.448578   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.449122   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.450707   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:44.446190   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.447001   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.448578   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.449122   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:44.450707   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:44.454185   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:44.454195   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:44.516835   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:44.516854   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:33:47.045086   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:33:47.055463   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:33:47.055525   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:33:47.080371   54335 cri.go:89] found id: ""
	I1205 06:33:47.080384   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.080391   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:33:47.080396   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:33:47.080458   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:33:47.119514   54335 cri.go:89] found id: ""
	I1205 06:33:47.119527   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.119535   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:33:47.119539   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:33:47.119594   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:33:47.147444   54335 cri.go:89] found id: ""
	I1205 06:33:47.147457   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.147464   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:33:47.147469   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:33:47.147523   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:33:47.177712   54335 cri.go:89] found id: ""
	I1205 06:33:47.177726   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.177733   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:33:47.177738   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:33:47.177800   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:33:47.202097   54335 cri.go:89] found id: ""
	I1205 06:33:47.202110   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.202118   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:33:47.202124   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:33:47.202179   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:33:47.226333   54335 cri.go:89] found id: ""
	I1205 06:33:47.226347   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.226354   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:33:47.226359   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:33:47.226431   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:33:47.251986   54335 cri.go:89] found id: ""
	I1205 06:33:47.251999   54335 logs.go:282] 0 containers: []
	W1205 06:33:47.252007   54335 logs.go:284] No container was found matching "kindnet"
	I1205 06:33:47.252014   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:33:47.252025   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:33:47.308015   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:33:47.308032   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:33:47.318805   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:33:47.318820   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:33:47.387458   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:33:47.379184   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.379724   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.381602   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.382334   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.383761   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:33:47.379184   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.379724   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.381602   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.382334   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:33:47.383761   17151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:33:47.387468   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:33:47.387478   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:33:47.448913   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:33:47.448930   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
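The 06:33:47 iteration above is one pass of minikube's apiserver health poll: pgrep for a running kube-apiserver, a crictl query per control-plane component, then log gathering when nothing is found. A minimal standalone sketch of the same checks, assuming a shell inside the minikube node (component names, paths, and commands are taken from the log itself):

    # poll the CRI for each expected control-plane container (sketch)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # gather the logs minikube collects when the poll comes up empty
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig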
	[... the same poll repeats every ~3 seconds (06:33:49, 06:33:52, 06:33:55, 06:33:58, 06:34:01) with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers found, and every "describe nodes" attempt refused on localhost:8441; only the order of the log-gathering steps varies between iterations ...]
	I1205 06:34:04.651581   54335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:34:04.661868   54335 kubeadm.go:602] duration metric: took 4m3.72973724s to restartPrimaryControlPlane
	W1205 06:34:04.661926   54335 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:34:04.661999   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:34:05.076526   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:34:05.090468   54335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:34:05.098831   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:34:05.098888   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:34:05.107168   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:34:05.107177   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:34:05.107230   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:34:05.115256   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:34:05.115315   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:34:05.123163   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:34:05.130789   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:34:05.130850   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:34:05.138646   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.147024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:34:05.147082   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:34:05.155378   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:34:05.163928   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:34:05.163985   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
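The stale-config check above reduces to a simple pattern: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint. Here every grep exits with status 2 because the earlier kubeadm reset removed the files, so the rm calls are no-ops. A compact equivalent of the cleanup (endpoint taken from the log):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done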
	I1205 06:34:05.171609   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:34:05.211033   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:34:05.211109   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:34:05.279588   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:34:05.279653   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:34:05.279688   54335 kubeadm.go:319] OS: Linux
	I1205 06:34:05.279731   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:34:05.279778   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:34:05.279824   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:34:05.279876   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:34:05.279924   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:34:05.279971   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:34:05.280015   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:34:05.280062   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:34:05.280106   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:34:05.346565   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:34:05.346667   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:34:05.346756   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:34:05.352620   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:34:05.358148   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:34:05.358236   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:34:05.358307   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:34:05.358383   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:34:05.358442   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:34:05.358512   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:34:05.358564   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:34:05.358626   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:34:05.358685   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:34:05.358759   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:34:05.358831   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:34:05.358869   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:34:05.358923   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:34:05.469895   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:34:05.573671   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:34:05.924291   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:34:06.081184   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:34:06.337744   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:34:06.338499   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:34:06.342999   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:34:06.346294   54335 out.go:252]   - Booting up control plane ...
	I1205 06:34:06.346403   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:34:06.346486   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:34:06.347115   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:34:06.367588   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:34:06.367869   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:34:06.375582   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:34:06.375840   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:34:06.375882   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:34:06.509639   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:34:06.509751   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:38:06.507887   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288295s
	I1205 06:38:06.507910   54335 kubeadm.go:319] 
	I1205 06:38:06.508003   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:38:06.508055   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:38:06.508166   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:38:06.508171   54335 kubeadm.go:319] 
	I1205 06:38:06.508290   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:38:06.508326   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:38:06.508363   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:38:06.508367   54335 kubeadm.go:319] 
	I1205 06:38:06.511849   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:38:06.512286   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:38:06.512417   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:38:06.512667   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:38:06.512672   54335 kubeadm.go:319] 
	I1205 06:38:06.512746   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 06:38:06.512894   54335 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288295s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
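The kubelet never came up within kubeadm's 4m0s window, so the triage kubeadm suggests is the right next step; the health endpoint it polls can also be probed directly on the node:

    systemctl status kubelet
    journalctl -xeu kubelet
    # the probe kubeadm retries for up to 4m0s before giving up
    curl -sSL http://127.0.0.1:10248/healthz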
	
	I1205 06:38:06.512983   54335 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 06:38:06.919674   54335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:38:06.932797   54335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:38:06.932850   54335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:38:06.940628   54335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:38:06.940637   54335 kubeadm.go:158] found existing configuration files:
	
	I1205 06:38:06.940686   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:38:06.948311   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:38:06.948364   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:38:06.955656   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:38:06.963182   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:38:06.963234   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:38:06.970398   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.978024   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:38:06.978085   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:38:06.985044   54335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:38:06.992736   54335 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:38:06.992788   54335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:38:07.000057   54335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:38:07.042188   54335 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:38:07.042482   54335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:38:07.116661   54335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:38:07.116719   54335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:38:07.116751   54335 kubeadm.go:319] OS: Linux
	I1205 06:38:07.116792   54335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:38:07.116836   54335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:38:07.116880   54335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:38:07.116923   54335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:38:07.116973   54335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:38:07.117018   54335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:38:07.117060   54335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:38:07.117104   54335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:38:07.117146   54335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:38:07.192664   54335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:38:07.192776   54335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:38:07.192871   54335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:38:07.201632   54335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:38:07.206982   54335 out.go:252]   - Generating certificates and keys ...
	I1205 06:38:07.207075   54335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:38:07.207145   54335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:38:07.207234   54335 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:38:07.207300   54335 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:38:07.207374   54335 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:38:07.207431   54335 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:38:07.207500   54335 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:38:07.207566   54335 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:38:07.207644   54335 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:38:07.207721   54335 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:38:07.207758   54335 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:38:07.207819   54335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:38:07.441757   54335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:38:07.738285   54335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:38:07.865941   54335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:38:08.382979   54335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:38:08.523706   54335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:38:08.524241   54335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:38:08.526890   54335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:38:08.530137   54335 out.go:252]   - Booting up control plane ...
	I1205 06:38:08.530240   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:38:08.530313   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:38:08.530379   54335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:38:08.552364   54335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:38:08.552467   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:38:08.559742   54335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:38:08.560021   54335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:38:08.560062   54335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:38:08.679099   54335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:38:08.679206   54335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:42:08.679850   54335 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117292s
	I1205 06:42:08.679871   54335 kubeadm.go:319] 
	I1205 06:42:08.679925   54335 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:42:08.679955   54335 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:42:08.680053   54335 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:42:08.680057   54335 kubeadm.go:319] 
	I1205 06:42:08.680155   54335 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:42:08.680184   54335 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:42:08.680212   54335 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:42:08.680215   54335 kubeadm.go:319] 
	I1205 06:42:08.683507   54335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:42:08.683930   54335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:42:08.684037   54335 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:42:08.684273   54335 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:42:08.684278   54335 kubeadm.go:319] 
	I1205 06:42:08.684346   54335 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:42:08.684393   54335 kubeadm.go:403] duration metric: took 12m7.791636767s to StartCluster
	I1205 06:42:08.684424   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:42:08.684483   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:42:08.708784   54335 cri.go:89] found id: ""
	I1205 06:42:08.708797   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.708804   54335 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:42:08.708809   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 06:42:08.708865   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:42:08.733583   54335 cri.go:89] found id: ""
	I1205 06:42:08.733596   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.733603   54335 logs.go:284] No container was found matching "etcd"
	I1205 06:42:08.733608   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 06:42:08.733670   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:42:08.762239   54335 cri.go:89] found id: ""
	I1205 06:42:08.762252   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.762259   54335 logs.go:284] No container was found matching "coredns"
	I1205 06:42:08.762264   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:42:08.762320   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:42:08.785696   54335 cri.go:89] found id: ""
	I1205 06:42:08.785708   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.785715   54335 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:42:08.785734   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:42:08.785790   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:42:08.810075   54335 cri.go:89] found id: ""
	I1205 06:42:08.810088   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.810096   54335 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:42:08.810100   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:42:08.810158   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:42:08.834276   54335 cri.go:89] found id: ""
	I1205 06:42:08.834289   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.834296   54335 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:42:08.834302   54335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 06:42:08.834358   54335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:42:08.858346   54335 cri.go:89] found id: ""
	I1205 06:42:08.858359   54335 logs.go:282] 0 containers: []
	W1205 06:42:08.858366   54335 logs.go:284] No container was found matching "kindnet"
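The seven empty container scans above can be reproduced by hand; a hedged sketch using the same crictl flags the log shows, with the component list copied from the loop above:

	# An empty ID list for every name confirms no control-plane container was
	# ever created, consistent with the kubelet never passing its health check.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  out/minikube-linux-arm64 -p functional-101526 ssh -- sudo crictl ps -a --quiet --name="$name"
	done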
	I1205 06:42:08.858374   54335 logs.go:123] Gathering logs for kubelet ...
	I1205 06:42:08.858383   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:42:08.913473   54335 logs.go:123] Gathering logs for dmesg ...
	I1205 06:42:08.913490   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:42:08.924092   54335 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:42:08.924108   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:42:08.996046   54335 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:42:08.988018   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.988849   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990388   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.990679   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:42:08.992097   21447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:42:08.996056   54335 logs.go:123] Gathering logs for containerd ...
	I1205 06:42:08.996066   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 06:42:09.060557   54335 logs.go:123] Gathering logs for container status ...
	I1205 06:42:09.060575   54335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 06:42:09.093287   54335 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:42:09.093337   54335 out.go:285] * 
	W1205 06:42:09.093398   54335 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.093427   54335 out.go:285] * 
	W1205 06:42:09.096107   54335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:42:09.099524   54335 out.go:203] 
	W1205 06:42:09.101056   54335 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117292s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:42:09.101108   54335 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:42:09.101134   54335 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:42:09.103029   54335 out.go:203] 
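The suggestion above targets a cgroup-driver mismatch between the kubelet and the container runtime; a hedged sketch of the retry, reusing this run's profile and adding only the suggested flag (whether it also clears the cgroup v1 validation error seen in the kubelet journal below is not established by this log):

	# Retry the start with the kubelet pinned to the systemd cgroup driver,
	# keeping the run's other start flags unchanged.
	out/minikube-linux-arm64 start -p functional-101526 \
	  --extra-config=kubelet.cgroup-driver=systemd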
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:43:53.466294   22862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:53.466972   22862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:53.468799   22862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:53.469542   22862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:43:53.471156   22862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:43:53 up  1:26,  0 user,  load average: 0.19, 0.26, 0.37
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:43:49 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:50 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 05 06:43:50 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:50 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:50 functional-101526 kubelet[22747]: E1205 06:43:50.640653   22747 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:50 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:50 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:51 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 05 06:43:51 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:51 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:51 functional-101526 kubelet[22752]: E1205 06:43:51.395154   22752 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:51 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:51 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 05 06:43:52 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:52 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:52 functional-101526 kubelet[22757]: E1205 06:43:52.171048   22757 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 05 06:43:52 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:52 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:43:52 functional-101526 kubelet[22778]: E1205 06:43:52.927242   22778 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:43:52 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
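The kubelet journal above shows the actual blocker: the kubelet exits at configuration validation because the node is on cgroup v1, so it never reaches the point of serving /healthz, and systemd keeps restarting it (counter at 459 by the end of the capture). Two hedged follow-ups, assuming shell access to the node; the failCgroupV1 field name comes from the SystemVerification warning and KEP-5573, and whether this minikube build plumbs it into /var/lib/kubelet/config.yaml is an assumption:

	# Which cgroup hierarchy the node sees: "cgroup2fs" means v2, "tmpfs" means v1.
	out/minikube-linux-arm64 -p functional-101526 ssh -- stat -fc %T /sys/fs/cgroup
	# The KubeletConfiguration knob the warning names; for kubelet v1.35+ on a
	# cgroup v1 host it would need to be set to false in the kubelet config, e.g.:
	#   failCgroupV1: false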
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (341.726767ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.32s)
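As an aside, the status probe in the helper output above uses minikube's Go-template formatting; a sketch printing a few fields at once (field names Host, Kubelet and APIServer assumed to match this build's status struct):

	out/minikube-linux-arm64 status -p functional-101526 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'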

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
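The warnings that follow are this poll loop hitting the dead apiserver once per tick for the full 4m0s window; a hedged manual equivalent of the same query, assuming kubectl on PATH and the kubeconfig context minikube writes for this profile:

	kubectl --context functional-101526 -n kube-system get pods \
	  -l integration-test=storage-provisioner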
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1205 06:42:27.068262    4192 retry.go:31] will retry after 3.547280682s: Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1205 06:42:40.615804    4192 retry.go:31] will retry after 3.257894016s: Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 12 times)
I1205 06:42:53.874559    4192 retry.go:31] will retry after 4.950458312s: Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 7 times)
E1205 06:43:01.797260    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 6 times)
I1205 06:43:08.826817    4192 retry.go:31] will retry after 9.223271887s: Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 18 times)
I1205 06:43:28.050398    4192 retry.go:31] will retry after 13.659780948s: Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 121 times)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1205 06:46:04.870427    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: (previous warning repeated verbatim 11 more times)
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (301.580511ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
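For reference, the wait that produced the warnings above is just a label-selector pod list retried until the 4m0s deadline. A minimal client-go sketch of that pattern follows (the kubeconfig path and the allRunning helper are illustrative, not minikube's actual test helpers):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig pointing at the functional-101526 profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same 4m0s budget the test uses; every failed List is logged and retried.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pods, listErr := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if listErr != nil {
				fmt.Println("WARNING: pod list returned:", listErr) // mirrors helpers_test.go:337
			} else if len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("storage-provisioner is Running")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed waiting for storage-provisioner:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

With the apiserver down, every List here fails at the TCP layer, which is why the same "connection refused" warning repeats until the context deadline fires.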
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
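The inspect dump above tells the story in two fields: the container State.Status is "running" while the apiserver's 8441/tcp is published on 127.0.0.1:32791. A hedged sketch of pulling just those fields with a docker inspect Go template instead of the full dump (profile name as in this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The template selects the container state and the host port bound to
		// the apiserver's 8441/tcp; for this run it should print "running 32791".
		format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-101526").CombinedOutput()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}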
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (316.424293ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
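The two status probes disagree in an informative way: {{.Host}} reports Running (the Docker container is up) while {{.APIServer}} reported Stopped, which is exactly the state that yields "connect: connection refused" on 192.168.49.2:8441. A stdlib-only probe of the same endpoint (the 2s timeout is an arbitrary choice) reproduces the symptom without kubectl:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same address the failed pod lists dialed; while the apiserver is down
		// nothing listens there, so DialTimeout returns "connection refused".
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}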
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-101526 image load --daemon kicbase/echo-server:functional-101526 --alsologtostderr                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls                                                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image save kicbase/echo-server:functional-101526 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image rm kicbase/echo-server:functional-101526 --alsologtostderr                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls                                                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls                                                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image save --daemon kicbase/echo-server:functional-101526 --alsologtostderr                                                                   │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /etc/ssl/certs/4192.pem                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /usr/share/ca-certificates/4192.pem                                                                                              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /etc/ssl/certs/41922.pem                                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /usr/share/ca-certificates/41922.pem                                                                                             │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh sudo cat /etc/test/nested/copy/4192/hosts                                                                                                 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls --format short --alsologtostderr                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls --format yaml --alsologtostderr                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh            │ functional-101526 ssh pgrep buildkitd                                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ image          │ functional-101526 image build -t localhost/my-image:functional-101526 testdata/build --alsologtostderr                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls                                                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls --format json --alsologtostderr                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ image          │ functional-101526 image ls --format table --alsologtostderr                                                                                                     │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ update-context │ functional-101526 update-context --alsologtostderr -v=2                                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ update-context │ functional-101526 update-context --alsologtostderr -v=2                                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ update-context │ functional-101526 update-context --alsologtostderr -v=2                                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:44:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:44:08.633583   71453 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:44:08.633762   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.633793   71453 out.go:374] Setting ErrFile to fd 2...
	I1205 06:44:08.633814   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.634102   71453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:44:08.634499   71453 out.go:368] Setting JSON to false
	I1205 06:44:08.635318   71453 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5196,"bootTime":1764911853,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:44:08.635413   71453 start.go:143] virtualization:  
	I1205 06:44:08.638655   71453 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:44:08.642448   71453 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:08.642550   71453 notify.go:221] Checking for updates...
	I1205 06:44:08.648062   71453 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:08.651012   71453 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:44:08.653873   71453 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:44:08.656772   71453 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:44:08.659679   71453 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:08.662989   71453 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:08.663608   71453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:08.693276   71453 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:44:08.693431   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.754172   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.744884021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.754286   71453 docker.go:319] overlay module found
	I1205 06:44:08.759070   71453 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:08.761929   71453 start.go:309] selected driver: docker
	I1205 06:44:08.761955   71453 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.762046   71453 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:08.762156   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.814269   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.805597876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.814712   71453 cni.go:84] Creating CNI manager for ""
	I1205 06:44:08.814781   71453 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:44:08.814823   71453 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.818107   71453 out.go:179] * dry-run validation complete!
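	
	The "Last Start" entries above show minikube re-validating the docker driver against the profile's stored cluster config. That config is persisted as JSON under the profile directory; a small sketch that reads back the fields relevant to this failure (the path layout and field names follow the dump above, but treat both as an assumption rather than a stable interface):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// Subset of minikube's cluster config; the on-disk JSON carries many more
	// fields, as the dump in the log shows.
	type clusterConfig struct {
		Name             string
		APIServerPort    int
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}
	
	func main() {
		// Assumption: default profile layout under MINIKUBE_HOME.
		path := os.ExpandEnv("$MINIKUBE_HOME/profiles/functional-101526/config.json")
		raw, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read config:", err)
			return
		}
		var cc clusterConfig
		if err := json.Unmarshal(raw, &cc); err != nil {
			fmt.Println("parse config:", err)
			return
		}
		fmt.Printf("%s: Kubernetes %s on %s, apiserver port %d\n",
			cc.Name, cc.KubernetesConfig.KubernetesVersion,
			cc.KubernetesConfig.ContainerRuntime, cc.APIServerPort)
	}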
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:44:14 functional-101526 containerd[10262]: time="2025-12-05T06:44:14.978874263Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:14 functional-101526 containerd[10262]: time="2025-12-05T06:44:14.981235066Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.068143667Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-101526\""
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.070961057Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-101526\""
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.073152554Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.082438636Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-101526\" returns successfully"
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.326608108Z" level=info msg="No images store for sha256:a7bf6fdf6c8cb2ce12a051ee0bc6bde4eb3f79bf26b0d243335100c74502d5b4"
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.328913985Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-101526\""
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.348370631Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:16 functional-101526 containerd[10262]: time="2025-12-05T06:44:16.348685177Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.113774307Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-101526\""
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.116263349Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-101526\""
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.119159789Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.127061825Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-101526\" returns successfully"
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.764021132Z" level=info msg="No images store for sha256:e1848806ece2d0c329885558e1d9db974809851c92e5805071f36e352c25df82"
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.766146085Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-101526\""
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.772959757Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:17 functional-101526 containerd[10262]: time="2025-12-05T06:44:17.773630590Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.315957800Z" level=info msg="connecting to shim 761o95a6tmtkcw9jdkx9gobbx" address="unix:///run/containerd/s/29d930167e0566f1990efcf3b3f6354cd45a1ddf0a718347345537fa8a34e47c" namespace=k8s.io protocol=ttrpc version=3
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.407049926Z" level=info msg="shim disconnected" id=761o95a6tmtkcw9jdkx9gobbx namespace=k8s.io
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.407089820Z" level=info msg="cleaning up after shim disconnected" id=761o95a6tmtkcw9jdkx9gobbx namespace=k8s.io
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.407099699Z" level=info msg="cleaning up dead shim" id=761o95a6tmtkcw9jdkx9gobbx namespace=k8s.io
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.685544753Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-101526\""
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.691624707Z" level=info msg="ImageCreate event name:\"sha256:45cf4ce823e1a01fadb98ac71f8d70653ba75c4b5dfbb62d6d91fbec65562b30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 06:44:24 functional-101526 containerd[10262]: time="2025-12-05T06:44:24.691977071Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-101526\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:18.639566   25626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:18.640235   25626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:18.641652   25626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:18.642203   25626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:18.643702   25626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:46:18 up  1:28,  0 user,  load average: 0.50, 0.41, 0.42
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:46:15 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:16 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 05 06:46:16 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:16 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:16 functional-101526 kubelet[25501]: E1205 06:46:16.395697   25501 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:16 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:16 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 05 06:46:17 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-101526 kubelet[25507]: E1205 06:46:17.140933   25507 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 05 06:46:17 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-101526 kubelet[25528]: E1205 06:46:17.903162   25528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:17 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:18 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 05 06:46:18 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:18 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:18 functional-101526 kubelet[25630]: E1205 06:46:18.653831   25630 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:18 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:18 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (316.607662ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.63s)
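The kubelet crash-loop captured above ("kubelet is configured to not run on a host using cgroup v1") is the likely reason the API server on port 8441 stays unreachable throughout this group. As a minimal sketch, assuming SSH access to the profile named in this report, the node's cgroup mode can be confirmed like so:

    # Print the filesystem type mounted at /sys/fs/cgroup inside the node:
    # "cgroup2fs" means the unified cgroup v2 hierarchy, "tmpfs" means legacy cgroup v1.
    out/minikube-linux-arm64 -p functional-101526 ssh -- stat -fc %T /sys/fs/cgroup/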

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-101526 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-101526 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (62.28534ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-101526 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
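The template error repeated above is a secondary symptom: with the API server refusing connections, kubectl receives an empty List, so "index .items 0" has nothing to index. A guarded variant of the same template, shown purely as an illustrative sketch, prints nothing instead of erroring when zero nodes come back:

    kubectl --context functional-101526 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'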
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:

-- stdout --
	[
	    {
	        "Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	        "Created": "2025-12-05T06:15:09.334287249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:15:09.400200427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
	        "LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
	        "Name": "/functional-101526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
	                "LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101526",
	                "Source": "/var/lib/docker/volumes/functional-101526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101526",
	                "name.minikube.sigs.k8s.io": "functional-101526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
	            "SandboxKey": "/var/run/docker/netns/4ee213ab313f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:24:89:4a:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
	                    "EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101526",
	                        "7d26b0b609d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
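When only a few fields of this dump are of interest, docker inspect also accepts a Go-template --format string; a small sketch using field paths visible in the output above:

    # Prints the container state and its IP on the profile network,
    # e.g. "running 192.168.49.2" for the container inspected here.
    docker inspect functional-101526 \
      --format '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-101526").IPAddress}}'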
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 2 (318.312934ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-101526 service hello-node --url --format={{.IP}}                                                                                         │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ service   │ functional-101526 service hello-node --url                                                                                                          │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1              │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:43 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh -- ls -la /mount-9p                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh cat /mount-9p/test-1764917039148435049                                                                                        │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh -- ls -la /mount-9p                                                                                                           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh sudo umount -f /mount-9p                                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount1 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount2 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ mount     │ -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount3 --alsologtostderr -v=1                  │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ ssh       │ functional-101526 ssh findmnt -T /mount1                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount2                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ ssh       │ functional-101526 ssh findmnt -T /mount3                                                                                                            │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │ 05 Dec 25 06:44 UTC │
	│ mount     │ -p functional-101526 --kill=true                                                                                                                    │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ start     │ -p functional-101526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-101526 --alsologtostderr -v=1                                                                                      │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:44 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:44:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:44:08.633583   71453 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:44:08.633762   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.633793   71453 out.go:374] Setting ErrFile to fd 2...
	I1205 06:44:08.633814   71453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.634102   71453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:44:08.634499   71453 out.go:368] Setting JSON to false
	I1205 06:44:08.635318   71453 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5196,"bootTime":1764911853,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:44:08.635413   71453 start.go:143] virtualization:  
	I1205 06:44:08.638655   71453 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:44:08.642448   71453 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:08.642550   71453 notify.go:221] Checking for updates...
	I1205 06:44:08.648062   71453 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:08.651012   71453 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:44:08.653873   71453 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:44:08.656772   71453 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:44:08.659679   71453 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:08.662989   71453 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:08.663608   71453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:08.693276   71453 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:44:08.693431   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.754172   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.744884021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.754286   71453 docker.go:319] overlay module found
	I1205 06:44:08.759070   71453 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:08.761929   71453 start.go:309] selected driver: docker
	I1205 06:44:08.761955   71453 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.762046   71453 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:08.762156   71453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.814269   71453 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.805597876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.814712   71453 cni.go:84] Creating CNI manager for ""
	I1205 06:44:08.814781   71453 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:44:08.814823   71453 start.go:353] cluster config:
	{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.818107   71453 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145026672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145041688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145095498Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145105836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145128630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145145402Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145253415Z" level=info msg="runtime interface created"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145274027Z" level=info msg="created NRI interface"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145290905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145338700Z" level=info msg="Connect containerd service"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.145722270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.146767640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165396800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165459980Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165493022Z" level=info msg="Start subscribing containerd event"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.165540539Z" level=info msg="Start recovering state"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192890545Z" level=info msg="Start event monitor"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192942246Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192952470Z" level=info msg="Start streaming server"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192971760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192981229Z" level=info msg="runtime interface starting up..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192987859Z" level=info msg="starting plugins..."
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.192998526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:29:59 functional-101526 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 05 06:29:59 functional-101526 containerd[10262]: time="2025-12-05T06:29:59.194904270Z" level=info msg="containerd successfully booted in 0.069048s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:44:11.542702   23849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:11.543118   23849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:11.544656   23849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:11.545002   23849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:44:11.546482   23849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 06:44:11 up  1:26,  0 user,  load average: 0.90, 0.41, 0.42
	Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 05 06:44:08 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:08 functional-101526 kubelet[23603]: E1205 06:44:08.906544   23603 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:08 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 05 06:44:09 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:09 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:09 functional-101526 kubelet[23632]: E1205 06:44:09.666694   23632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:09 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:10 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 05 06:44:10 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:10 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:10 functional-101526 kubelet[23731]: E1205 06:44:10.405506   23731 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:10 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:10 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:44:11 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 05 06:44:11 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:11 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:44:11 functional-101526 kubelet[23767]: E1205 06:44:11.171488   23767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:44:11 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:44:11 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
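The kubelet journal above pinpoints the root cause for this failure group: kubelet v1.35.0-beta.0 exits during configuration validation because the host only exposes cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and every dependent test sees connection refused on port 8441. A minimal diagnostic sketch, assuming shell access to the node (e.g. via "out/minikube-linux-arm64 ssh -p functional-101526"); the commands are standard, but the inline expected outputs are assumptions:

    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
    stat -fc %T /sys/fs/cgroup
    # Host-side cross-check; recent Docker releases print 1 or 2 here.
    docker info --format '{{.CgroupVersion}}'

The kernel section above (5.15.0-1084-aws on Ubuntu 20.04) is consistent with a cgroup v1 default. Newer kubelets gate this behavior via the failCgroupV1 KubeletConfiguration field, so the usual remedies are booting the host into cgroup v2 (systemd.unified_cgroup_hierarchy=1) or, where supported, relaxing that field.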
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 2 (325.935262ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.42s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1205 06:42:16.516824   67335 out.go:360] Setting OutFile to fd 1 ...
I1205 06:42:16.523186   67335 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:42:16.523212   67335 out.go:374] Setting ErrFile to fd 2...
I1205 06:42:16.523219   67335 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:42:16.523564   67335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:42:16.523915   67335 mustload.go:66] Loading cluster: functional-101526
I1205 06:42:16.524420   67335 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:42:16.525148   67335 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:42:16.569300   67335 host.go:66] Checking if "functional-101526" exists ...
I1205 06:42:16.569691   67335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:42:16.665631   67335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:42:16.649224391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:42:16.665760   67335 api_server.go:166] Checking apiserver status ...
I1205 06:42:16.665817   67335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 06:42:16.665867   67335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:42:16.710049   67335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
W1205 06:42:16.837974   67335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1205 06:42:16.841317   67335 out.go:179] * The control-plane node functional-101526 apiserver is not running: (state=Stopped)
I1205 06:42:16.844377   67335 out.go:179]   To start a cluster, run: "minikube start -p functional-101526"

stdout: * The control-plane node functional-101526 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-101526"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 67336: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
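Exit code 103 here is minikube's "apiserver not running" guard, not a tunnel conflict; the second tunnel never gets far enough to collide with the first. A pre-flight sketch built from commands that appear verbatim elsewhere in this report (the Running-state comparison is an assumption):

    # status exits non-zero while the cluster is down, but still prints the state.
    state="$(out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-101526)"
    if [ "$state" != "Running" ]; then
      out/minikube-linux-arm64 start -p functional-101526
    fi
    out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr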

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-101526 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-101526 apply -f testdata/testsvc.yaml: exit status 1 (100.485074ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-101526 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)
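The error text suggests --validate=false, but that flag only skips the OpenAPI schema download; the apply itself still has to reach 192.168.49.2:8441 and would fail identically while the apiserver is stopped. For completeness, the flag form (illustrative only, since the root cause is the dead control plane):

    kubectl --context functional-101526 apply --validate=false -f testdata/testsvc.yaml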

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (94.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.104.157.34": Temporary Error: Get "http://10.104.157.34": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-101526 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-101526 get svc nginx-svc: exit status 1 (61.117797ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-101526 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (94.70s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-101526 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-101526 create deployment hello-node --image kicbase/echo-server: exit status 1 (69.292033ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-101526 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.07s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 service list: exit status 103 (263.630617ms)

-- stdout --
	* The control-plane node functional-101526 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-101526"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-101526 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-101526 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-101526\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 service list -o json: exit status 103 (253.019336ms)

-- stdout --
	* The control-plane node functional-101526 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-101526"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-101526 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 service --namespace=default --https --url hello-node: exit status 103 (256.541554ms)

-- stdout --
	* The control-plane node functional-101526 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-101526"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-101526 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 service hello-node --url --format={{.IP}}: exit status 103 (259.676608ms)

-- stdout --
	* The control-plane node functional-101526 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-101526"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-101526 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-101526 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-101526\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 service hello-node --url: exit status 103 (255.753984ms)

-- stdout --
	* The control-plane node functional-101526 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-101526"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-101526 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-101526 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-101526"
functional_test.go:1579: failed to parse "* The control-plane node functional-101526 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-101526\"": parse "* The control-plane node functional-101526 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-101526\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764917039148435049" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764917039148435049" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764917039148435049" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001/test-1764917039148435049
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.750439ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:43:59.468458    4192 retry.go:31] will retry after 379.834206ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:43 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:43 test-1764917039148435049
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh cat /mount-9p/test-1764917039148435049
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-101526 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-101526 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (63.139939ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-101526 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (266.36913ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37903)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  5 06:43 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  5 06:43 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  5 06:43 test-1764917039148435049
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-101526 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37903
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001:/mount-9p --alsologtostderr -v=1] stderr:
I1205 06:43:59.211291   69580 out.go:360] Setting OutFile to fd 1 ...
I1205 06:43:59.211478   69580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:43:59.211484   69580 out.go:374] Setting ErrFile to fd 2...
I1205 06:43:59.211488   69580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:43:59.211747   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:43:59.211995   69580 mustload.go:66] Loading cluster: functional-101526
I1205 06:43:59.212341   69580 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:43:59.212860   69580 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:43:59.234624   69580 host.go:66] Checking if "functional-101526" exists ...
I1205 06:43:59.234963   69580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:43:59.334844   69580 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:43:59.325515155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:43:59.335002   69580 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 06:43:59.373276   69580 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001 into VM as /mount-9p ...
I1205 06:43:59.376133   69580 out.go:179]   - Mount type:   9p
I1205 06:43:59.378994   69580 out.go:179]   - User ID:      docker
I1205 06:43:59.381874   69580 out.go:179]   - Group ID:     docker
I1205 06:43:59.384838   69580 out.go:179]   - Version:      9p2000.L
I1205 06:43:59.387559   69580 out.go:179]   - Message Size: 262144
I1205 06:43:59.390503   69580 out.go:179]   - Options:      map[]
I1205 06:43:59.393384   69580 out.go:179]   - Bind Address: 192.168.49.1:37903
I1205 06:43:59.396230   69580 out.go:179] * Userspace file server: 
I1205 06:43:59.396681   69580 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1205 06:43:59.396785   69580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:43:59.416500   69580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:43:59.523845   69580 mount.go:180] unmount for /mount-9p ran successfully
I1205 06:43:59.523873   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1205 06:43:59.532153   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37903,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1205 06:43:59.542447   69580 main.go:127] stdlog: ufs.go:141 connected
I1205 06:43:59.542605   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tversion tag 65535 msize 262144 version '9P2000.L'
I1205 06:43:59.542651   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rversion tag 65535 msize 262144 version '9P2000'
I1205 06:43:59.542878   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1205 06:43:59.542940   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rattach tag 0 aqid (f16104 ed40d026 'd')
I1205 06:43:59.543585   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 0
I1205 06:43:59.543635   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16104 ed40d026 'd') m d775 at 0 mt 1764917039 l 4096 t 0 d 0 ext )
I1205 06:43:59.550258   69580 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/.mount-process: {Name:mk3b22575aa94a20fa300d84985853dd72ce5824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:43:59.550451   69580 mount.go:105] mount successful: ""
I1205 06:43:59.553922   69580 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3678766434/001 to /mount-9p
I1205 06:43:59.556731   69580 out.go:203] 
I1205 06:43:59.559585   69580 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1205 06:44:00.697878   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 0
I1205 06:44:00.697974   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16104 ed40d026 'd') m d775 at 0 mt 1764917039 l 4096 t 0 d 0 ext )
I1205 06:44:00.698349   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 1 
I1205 06:44:00.698386   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 
I1205 06:44:00.698520   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Topen tag 0 fid 1 mode 0
I1205 06:44:00.698568   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Ropen tag 0 qid (f16104 ed40d026 'd') iounit 0
I1205 06:44:00.698687   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 0
I1205 06:44:00.698725   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16104 ed40d026 'd') m d775 at 0 mt 1764917039 l 4096 t 0 d 0 ext )
I1205 06:44:00.698904   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 0 count 262120
I1205 06:44:00.699022   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 258
I1205 06:44:00.699168   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 261862
I1205 06:44:00.699197   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:00.699324   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 262120
I1205 06:44:00.699348   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:00.699489   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1205 06:44:00.699528   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16106 ed40d026 '') 
I1205 06:44:00.699653   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.699691   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16106 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.699837   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.699869   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16106 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.700009   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:00.700034   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:00.700162   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'test-1764917039148435049' 
I1205 06:44:00.700201   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16108 ed40d026 '') 
I1205 06:44:00.700332   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.700368   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.700490   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.700521   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.700651   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:00.700678   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:00.700817   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1205 06:44:00.700855   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16107 ed40d026 '') 
I1205 06:44:00.700976   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.701008   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16107 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.701131   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:00.701194   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16107 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.701513   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:00.701552   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:00.701714   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 262120
I1205 06:44:00.701754   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:00.701893   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 1
I1205 06:44:00.701928   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:00.973500   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 1 0:'test-1764917039148435049' 
I1205 06:44:00.973575   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16108 ed40d026 '') 
I1205 06:44:00.973762   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 1
I1205 06:44:00.973808   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.973971   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 1 newfid 2 
I1205 06:44:00.974001   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 
I1205 06:44:00.974117   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Topen tag 0 fid 2 mode 0
I1205 06:44:00.974165   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Ropen tag 0 qid (f16108 ed40d026 '') iounit 0
I1205 06:44:00.974290   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 1
I1205 06:44:00.974322   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:00.974493   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 2 offset 0 count 262120
I1205 06:44:00.974601   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 24
I1205 06:44:00.974755   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 2 offset 24 count 262120
I1205 06:44:00.974799   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:00.974974   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 2 offset 24 count 262120
I1205 06:44:00.975021   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:00.975226   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:00.975274   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:00.975446   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 1
I1205 06:44:00.975474   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.307876   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 0
I1205 06:44:01.307972   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16104 ed40d026 'd') m d775 at 0 mt 1764917039 l 4096 t 0 d 0 ext )
I1205 06:44:01.308353   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 1 
I1205 06:44:01.308390   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 
I1205 06:44:01.308526   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Topen tag 0 fid 1 mode 0
I1205 06:44:01.308578   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Ropen tag 0 qid (f16104 ed40d026 'd') iounit 0
I1205 06:44:01.308715   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 0
I1205 06:44:01.308749   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (f16104 ed40d026 'd') m d775 at 0 mt 1764917039 l 4096 t 0 d 0 ext )
I1205 06:44:01.308892   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 0 count 262120
I1205 06:44:01.309004   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 258
I1205 06:44:01.309194   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 261862
I1205 06:44:01.309243   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:01.309389   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 262120
I1205 06:44:01.309427   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:01.309550   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1205 06:44:01.309609   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16106 ed40d026 '') 
I1205 06:44:01.309737   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.309773   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16106 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.309901   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.309969   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (f16106 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.310107   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:01.310142   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.310268   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'test-1764917039148435049' 
I1205 06:44:01.310342   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16108 ed40d026 '') 
I1205 06:44:01.310477   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.310514   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.310642   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.310675   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('test-1764917039148435049' 'jenkins' 'jenkins' '' q (f16108 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.310802   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:01.310824   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.310951   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1205 06:44:01.310987   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rwalk tag 0 (f16107 ed40d026 '') 
I1205 06:44:01.311119   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.311151   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16107 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.311278   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tstat tag 0 fid 2
I1205 06:44:01.311318   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (f16107 ed40d026 '') m 644 at 0 mt 1764917039 l 24 t 0 d 0 ext )
I1205 06:44:01.311441   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 2
I1205 06:44:01.311465   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.311575   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tread tag 0 fid 1 offset 258 count 262120
I1205 06:44:01.311609   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rread tag 0 count 0
I1205 06:44:01.311778   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 1
I1205 06:44:01.311821   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.312996   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1205 06:44:01.313055   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rerror tag 0 ename 'file not found' ecode 0
I1205 06:44:01.599858   69580 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50502 Tclunk tag 0 fid 0
I1205 06:44:01.599920   69580 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50502 Rclunk tag 0
I1205 06:44:01.601216   69580 main.go:127] stdlog: ufs.go:147 disconnected
I1205 06:44:01.623295   69580 out.go:179] * Unmounting /mount-9p ...
I1205 06:44:01.626310   69580 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1205 06:44:01.633506   69580 mount.go:180] unmount for /mount-9p ran successfully
I1205 06:44:01.633649   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/.mount-process: {Name:mk3b22575aa94a20fa300d84985853dd72ce5824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:44:01.636808   69580 out.go:203] 
W1205 06:44:01.639829   69580 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1205 06:44:01.642719   69580 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.57s)
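The 9p plumbing itself worked in this run: the findmnt output, the directory listing, and the ufs trace above all show a healthy mount, and the failure is once again the kubectl call against the stopped apiserver. A standalone re-check sketch mirroring the commands in the trace (the host directory is a placeholder; minikube picks the bind port itself):

    out/minikube-linux-arm64 mount -p functional-101526 /tmp/mount-check:/mount-9p --alsologtostderr -v=1 &
    sleep 2  # give the userspace 9p server a moment to come up
    out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-101526 ssh -- ls -la /mount-9p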

x
+
TestKubernetesUpgrade (789.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1205 07:14:14.019402    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.285564619s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-496233
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-496233: (1.346702841s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-496233 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-496233 status --format={{.Host}}: exit status 7 (71.648509ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m27.613903808s)

-- stdout --
	* [kubernetes-upgrade-496233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-496233" primary control-plane node in "kubernetes-upgrade-496233" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...

-- /stdout --
** stderr ** 
	I1205 07:14:34.353795  201585 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:14:34.353911  201585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:14:34.353925  201585 out.go:374] Setting ErrFile to fd 2...
	I1205 07:14:34.353929  201585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:14:34.354188  201585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:14:34.354543  201585 out.go:368] Setting JSON to false
	I1205 07:14:34.355418  201585 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7021,"bootTime":1764911853,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:14:34.355493  201585 start.go:143] virtualization:  
	I1205 07:14:34.358521  201585 out.go:179] * [kubernetes-upgrade-496233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:14:34.361977  201585 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:14:34.362119  201585 notify.go:221] Checking for updates...
	I1205 07:14:34.367507  201585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:14:34.371160  201585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:14:34.373814  201585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:14:34.376625  201585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:14:34.379341  201585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:14:34.382505  201585 config.go:182] Loaded profile config "kubernetes-upgrade-496233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1205 07:14:34.383070  201585 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:14:34.415576  201585 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:14:34.415741  201585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:14:34.473580  201585 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:14:34.463959579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:14:34.473689  201585 docker.go:319] overlay module found
	I1205 07:14:34.476746  201585 out.go:179] * Using the docker driver based on existing profile
	I1205 07:14:34.479530  201585 start.go:309] selected driver: docker
	I1205 07:14:34.479551  201585 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-496233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-496233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:14:34.479643  201585 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:14:34.480373  201585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:14:34.530149  201585 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:14:34.52113626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:14:34.530484  201585 cni.go:84] Creating CNI manager for ""
	I1205 07:14:34.530557  201585 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:14:34.530606  201585 start.go:353] cluster config:
	{Name:kubernetes-upgrade-496233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-496233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:14:34.533796  201585 out.go:179] * Starting "kubernetes-upgrade-496233" primary control-plane node in "kubernetes-upgrade-496233" cluster
	I1205 07:14:34.536689  201585 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:14:34.539679  201585 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:14:34.542763  201585 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:14:34.542849  201585 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:14:34.562518  201585 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:14:34.562540  201585 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:14:34.604256  201585 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:14:34.773537  201585 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
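	# Sketch: the two 404s above mean no preload tarball is published for this
	# v1.35.0-beta.0 + containerd + arm64 combination, so minikube falls back to
	# loading each cached image individually (seen later in this log). A quick
	# availability check against the first URL, copied verbatim from above:
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1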
	I1205 07:14:34.773681  201585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/config.json ...
	I1205 07:14:34.773799  201585 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.773889  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:14:34.773898  201585 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.368µs
	I1205 07:14:34.773910  201585 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:14:34.773914  201585 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:14:34.773922  201585 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.773939  201585 start.go:360] acquireMachinesLock for kubernetes-upgrade-496233: {Name:mk06cf3f5b0cf9df201cba8bb44188ac7e48b593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.773952  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:14:34.773957  201585 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.079µs
	I1205 07:14:34.773963  201585 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:14:34.773975  201585 start.go:364] duration metric: took 23.827µs to acquireMachinesLock for "kubernetes-upgrade-496233"
	I1205 07:14:34.773973  201585 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.773988  201585 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:14:34.773993  201585 fix.go:54] fixHost starting: 
	I1205 07:14:34.774000  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:14:34.774005  201585 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.863µs
	I1205 07:14:34.774012  201585 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:14:34.774022  201585 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.774048  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:14:34.774054  201585 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.788µs
	I1205 07:14:34.774060  201585 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:14:34.774077  201585 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.774107  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:14:34.774112  201585 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 36.899µs
	I1205 07:14:34.774123  201585 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:14:34.774136  201585 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.774174  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:14:34.774180  201585 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 48.501µs
	I1205 07:14:34.774186  201585 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:14:34.774196  201585 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.774224  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:14:34.774228  201585 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 33.945µs
	I1205 07:14:34.774233  201585 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:14:34.774244  201585 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:14:34.774262  201585 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-496233 --format={{.State.Status}}
	I1205 07:14:34.774276  201585 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:14:34.774282  201585 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.926µs
	I1205 07:14:34.774287  201585 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:14:34.774295  201585 cache.go:87] Successfully saved all images to host disk.
	I1205 07:14:34.791594  201585 fix.go:112] recreateIfNeeded on kubernetes-upgrade-496233: state=Stopped err=<nil>
	W1205 07:14:34.791623  201585 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:14:34.795145  201585 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-496233" ...
	I1205 07:14:34.795223  201585 cli_runner.go:164] Run: docker start kubernetes-upgrade-496233
	I1205 07:14:35.073566  201585 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-496233 --format={{.State.Status}}
	I1205 07:14:35.094747  201585 kic.go:430] container "kubernetes-upgrade-496233" state is running.
	I1205 07:14:35.095133  201585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-496233
	I1205 07:14:35.120703  201585 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/config.json ...
	I1205 07:14:35.121938  201585 machine.go:94] provisionDockerMachine start ...
	I1205 07:14:35.122043  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:35.147876  201585 main.go:143] libmachine: Using SSH client type: native
	I1205 07:14:35.148221  201585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1205 07:14:35.148229  201585 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:14:35.148875  201585 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60708->127.0.0.1:33013: read: connection reset by peer
	I1205 07:14:38.320999  201585 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-496233
	
	I1205 07:14:38.321022  201585 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-496233"
	I1205 07:14:38.321091  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:38.339258  201585 main.go:143] libmachine: Using SSH client type: native
	I1205 07:14:38.339584  201585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1205 07:14:38.339596  201585 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-496233 && echo "kubernetes-upgrade-496233" | sudo tee /etc/hostname
	I1205 07:14:38.507676  201585 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-496233
	
	I1205 07:14:38.507869  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:38.526919  201585 main.go:143] libmachine: Using SSH client type: native
	I1205 07:14:38.527230  201585 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1205 07:14:38.527253  201585 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-496233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-496233/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-496233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:14:38.677327  201585 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:14:38.677355  201585 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:14:38.677384  201585 ubuntu.go:190] setting up certificates
	I1205 07:14:38.677394  201585 provision.go:84] configureAuth start
	I1205 07:14:38.677456  201585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-496233
	I1205 07:14:38.705604  201585 provision.go:143] copyHostCerts
	I1205 07:14:38.705700  201585 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:14:38.705714  201585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:14:38.705822  201585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:14:38.705966  201585 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:14:38.705979  201585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:14:38.706019  201585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:14:38.706122  201585 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:14:38.706136  201585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:14:38.706166  201585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:14:38.706239  201585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-496233 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-496233 localhost minikube]
	I1205 07:14:39.042012  201585 provision.go:177] copyRemoteCerts
	I1205 07:14:39.042088  201585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:14:39.042164  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:39.060815  201585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/kubernetes-upgrade-496233/id_rsa Username:docker}
	I1205 07:14:39.165936  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:14:39.188965  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 07:14:39.208330  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:14:39.227451  201585 provision.go:87] duration metric: took 550.029467ms to configureAuth
	I1205 07:14:39.227479  201585 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:14:39.227718  201585 config.go:182] Loaded profile config "kubernetes-upgrade-496233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:14:39.227733  201585 machine.go:97] duration metric: took 4.10577023s to provisionDockerMachine
	I1205 07:14:39.227754  201585 start.go:293] postStartSetup for "kubernetes-upgrade-496233" (driver="docker")
	I1205 07:14:39.227772  201585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:14:39.227837  201585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:14:39.227900  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:39.246422  201585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/kubernetes-upgrade-496233/id_rsa Username:docker}
	I1205 07:14:39.352923  201585 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:14:39.356338  201585 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:14:39.356367  201585 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:14:39.356378  201585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:14:39.356436  201585 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:14:39.356515  201585 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:14:39.356644  201585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:14:39.364858  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:14:39.382994  201585 start.go:296] duration metric: took 155.217713ms for postStartSetup
	I1205 07:14:39.383104  201585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:14:39.383167  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:39.400950  201585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/kubernetes-upgrade-496233/id_rsa Username:docker}
	I1205 07:14:39.502998  201585 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:14:39.507794  201585 fix.go:56] duration metric: took 4.733793775s for fixHost
	I1205 07:14:39.507819  201585 start.go:83] releasing machines lock for "kubernetes-upgrade-496233", held for 4.733835268s
	I1205 07:14:39.507885  201585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-496233
	I1205 07:14:39.524353  201585 ssh_runner.go:195] Run: cat /version.json
	I1205 07:14:39.524401  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:39.524447  201585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:14:39.524499  201585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-496233
	I1205 07:14:39.542702  201585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/kubernetes-upgrade-496233/id_rsa Username:docker}
	I1205 07:14:39.550785  201585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/kubernetes-upgrade-496233/id_rsa Username:docker}
	I1205 07:14:39.758246  201585 ssh_runner.go:195] Run: systemctl --version
	I1205 07:14:39.764813  201585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:14:39.770204  201585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:14:39.770274  201585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:14:39.780597  201585 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:14:39.780621  201585 start.go:496] detecting cgroup driver to use...
	I1205 07:14:39.780652  201585 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:14:39.780707  201585 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:14:39.799018  201585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:14:39.811774  201585 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:14:39.811832  201585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:14:39.827808  201585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:14:39.840674  201585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:14:39.953259  201585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:14:40.081630  201585 docker.go:234] disabling docker service ...
	I1205 07:14:40.081747  201585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:14:40.097832  201585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:14:40.111586  201585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:14:40.233915  201585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:14:40.359578  201585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:14:40.372658  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:14:40.386992  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:14:40.396558  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:14:40.405651  201585 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:14:40.405801  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:14:40.414813  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:14:40.423675  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:14:40.432658  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:14:40.441558  201585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:14:40.449616  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:14:40.458471  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:14:40.467589  201585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:14:40.476923  201585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:14:40.484513  201585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:14:40.491933  201585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:14:40.607446  201585 ssh_runner.go:195] Run: sudo systemctl restart containerd
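	# Sketch: the sed edits above amount to pinning these CRI settings before the
	# restart picks them up -- sandbox_image=registry.k8s.io/pause:3.10.1,
	# SystemdCgroup=false (cgroupfs driver), the runc v2 runtime,
	# conf_dir=/etc/cni/net.d, and enable_unprivileged_ports=true.
	# To verify after the restart (keys taken from the commands above):
	sudo grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml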
	I1205 07:14:40.756067  201585 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:14:40.756171  201585 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:14:40.760042  201585 start.go:564] Will wait 60s for crictl version
	I1205 07:14:40.760123  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:40.763587  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:14:40.790044  201585 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:14:40.790148  201585 ssh_runner.go:195] Run: containerd --version
	I1205 07:14:40.812200  201585 ssh_runner.go:195] Run: containerd --version
	I1205 07:14:40.838998  201585 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:14:40.841970  201585 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-496233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:14:40.865404  201585 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:14:40.869363  201585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:14:40.879724  201585 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-496233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-496233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:14:40.879852  201585 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:14:40.879919  201585 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:14:40.904530  201585 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:14:40.904596  201585 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:14:40.904693  201585 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:40.904774  201585 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:40.904949  201585 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:40.904991  201585 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:14:40.905058  201585 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:40.904688  201585 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:14:40.905230  201585 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:40.905140  201585 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:40.907201  201585 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:40.908139  201585 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:14:40.908620  201585 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:40.908842  201585 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:40.909069  201585 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:40.909211  201585 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:40.909343  201585 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:40.909455  201585 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
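	# Sketch: the "No such image" responses above are the expected first pass --
	# minikube asks the host Docker daemon for each image before falling back to
	# its on-disk cache. The same lookup by hand, for one of the images:
	docker image inspect registry.k8s.io/pause:3.10.1 >/dev/null 2>&1 || echo "not in local daemon; cache will be used"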
	I1205 07:14:41.218343  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:14:41.218442  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:41.220362  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:14:41.220426  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:41.227620  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:14:41.227710  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:41.228865  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:14:41.228932  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:14:41.277362  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:14:41.277480  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:41.283769  201585 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:14:41.283813  201585 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:41.283860  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.290691  201585 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:14:41.290732  201585 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:41.290784  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.298380  201585 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:14:41.298430  201585 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:14:41.298478  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.298551  201585 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:14:41.298569  201585 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:41.298600  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.302503  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:14:41.302571  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:41.315700  201585 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:14:41.315804  201585 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:41.315898  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.316025  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:41.316147  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:41.316257  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:14:41.316375  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:41.341460  201585 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:14:41.341548  201585 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:41.341627  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.342696  201585 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:14:41.342760  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:41.395095  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:41.395210  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:41.395311  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:41.395383  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:41.395445  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:14:41.395448  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:41.395503  201585 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:14:41.395537  201585 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:41.395579  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:41.454188  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:14:41.468829  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:41.510609  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:14:41.510715  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:14:41.510752  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:14:41.510865  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:41.510948  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:41.511038  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:14:41.511143  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:14:41.538327  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:14:41.606364  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:14:41.606672  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:14:41.606463  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:14:41.606847  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:14:41.606494  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:14:41.606975  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:14:41.606516  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:14:41.607090  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:14:41.606582  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:14:41.606616  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:41.649016  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:14:41.649186  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:14:41.690488  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:14:41.690566  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:14:41.690681  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:14:41.690759  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:14:41.690788  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:14:41.690860  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:14:41.690886  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:14:41.690957  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:14:41.691053  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:14:41.691132  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:14:41.691167  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:14:41.776793  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:14:41.777235  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:14:41.776995  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:14:41.777624  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:14:41.802060  201585 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:14:41.802184  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:14:41.848434  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:14:41.848519  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:14:42.088027  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
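The lines above are minikube's cached-image load path: a stat over SSH to see whether the tarball already exists on the node, an scp from the host-side cache when it does not, then an import into containerd's k8s.io namespace (the namespace the CRI actually serves images from). A minimal sketch of the same three steps, using the pause_3.10.1 paths from this log and a hypothetical node address:

    # 1. existence check on the node (non-zero exit means "not there yet")
    stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1 || \
      scp ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 \
          root@192.168.76.2:/var/lib/minikube/images/pause_3.10.1   # 2. transfer from cache
    # 3. load into the k8s.io namespace so the kubelet/CRI can see it
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1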
	W1205 07:14:42.184206  201585 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:14:42.184438  201585 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:14:42.184529  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:14:42.370528  201585 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:14:42.370599  201585 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:14:42.370963  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:42.400722  201585 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:14:42.400849  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:14:42.406026  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
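The storage-provisioner warning above is the architecture-mismatch path: the image present in the runtime is amd64, not the wanted arm64 digest, so minikube removes it and re-imports the cached arm64 copy. The detect-and-remove step amounts to roughly this pair of commands (digest comparison elided):

    # list the image by name; minikube compares its sha with the expected arm64 digest
    sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
    # wrong digest -> remove via CRI so the cached arm64 tarball can be imported
    sudo $(which crictl) rmi gcr.io/k8s-minikube/storage-provisioner:v5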
	I1205 07:14:43.840003  201585 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.439095661s)
	I1205 07:14:43.840036  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:14:43.840053  201585 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:14:43.840101  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:14:43.840203  201585 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.433622891s)
	I1205 07:14:43.840214  201585 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:14:43.840272  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:14:44.626132  201585 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:14:44.626171  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:14:44.626242  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:14:44.626309  201585 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:14:44.626377  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:14:45.788183  201585 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.161778231s)
	I1205 07:14:45.788214  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:14:45.788232  201585 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:14:45.788282  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:14:47.319600  201585 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.531282082s)
	I1205 07:14:47.319629  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:14:47.319673  201585 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:14:47.319745  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:14:48.327743  201585 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.007969606s)
	I1205 07:14:48.327771  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:14:48.327789  201585 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:14:48.327837  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:14:49.355321  201585 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.027462358s)
	I1205 07:14:49.355345  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:14:49.355361  201585 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:14:49.355418  201585 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:14:49.745665  201585 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:14:49.745697  201585 cache_images.go:125] Successfully loaded all cached images
	I1205 07:14:49.745702  201585 cache_images.go:94] duration metric: took 8.841080415s to LoadCachedImages
	I1205 07:14:49.745714  201585 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:14:49.745841  201585 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-496233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-496233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
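The kubelet fragment above is written as a systemd drop-in; the empty ExecStart= line is deliberate, since a drop-in must first clear the ExecStart inherited from the base unit before supplying its own. Assembled, the file (installed a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) looks approximately like:

    [Unit]
    Wants=containerd.service

    [Service]
    # reset the ExecStart inherited from kubelet.service, then redefine it
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-496233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

    [Install]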
	I1205 07:14:49.745903  201585 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:14:49.771640  201585 cni.go:84] Creating CNI manager for ""
	I1205 07:14:49.771667  201585 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:14:49.771685  201585 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:14:49.771709  201585 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-496233 NodeName:kubernetes-upgrade-496233 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:14:49.771874  201585 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-496233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
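The generated kubeadm.yaml above targets the kubeadm.k8s.io/v1beta4 API, where extraArgs are name/value lists rather than flat maps. To sanity-check a rendered config like this by hand, recent kubeadm releases ship a validator; a sketch (flag availability depends on the kubeadm version):

    # validate the rendered config against the kubeadm API schema
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml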
	I1205 07:14:49.771948  201585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:14:49.781142  201585 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:14:49.781257  201585 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:14:49.790919  201585 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:14:49.791007  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:14:49.791097  201585 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:14:49.791129  201585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:14:49.791208  201585 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:14:49.791260  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:14:49.816979  201585 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:14:49.817013  201585 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:14:49.817036  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:14:49.817014  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:14:49.817102  201585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:14:49.837334  201585 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:14:49.837375  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
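Because the v1.35.0-beta.0 binaries are not pre-cached, minikube fetches kubectl, kubeadm, and kubelet from dl.k8s.io with a companion sha256 checksum file before pushing them to /var/lib/minikube/binaries. Done by hand, the equivalent download-and-verify would be something like (version and arch as in the log; the .sha256 files hold a bare hex digest):

    V=v1.35.0-beta.0; A=arm64
    for B in kubectl kubeadm kubelet; do
      curl -fL -o "$B"        "https://dl.k8s.io/release/$V/bin/linux/$A/$B"
      curl -fL -o "$B.sha256" "https://dl.k8s.io/release/$V/bin/linux/$A/$B.sha256"
      echo "$(cat "$B.sha256")  $B" | sha256sum -c -   # verify before installing
    done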
	I1205 07:14:50.700732  201585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:14:50.711224  201585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1205 07:14:50.726046  201585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:14:50.740490  201585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1205 07:14:50.755958  201585 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:14:50.760407  201585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
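The /etc/hosts rewrite above is idempotent: a first grep checks for an existing control-plane.minikube.internal entry, then the file is rebuilt with any stale mapping stripped and the current IP appended. The same pattern, spelled out:

    IP=192.168.76.2; HOST=control-plane.minikube.internal
    # drop any old mapping, append the current one, then copy the result into place
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts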
	I1205 07:14:50.771691  201585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:14:50.892785  201585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:14:50.912572  201585 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233 for IP: 192.168.76.2
	I1205 07:14:50.912593  201585 certs.go:195] generating shared ca certs ...
	I1205 07:14:50.912609  201585 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:14:50.912754  201585 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:14:50.912804  201585 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:14:50.912815  201585 certs.go:257] generating profile certs ...
	I1205 07:14:50.912906  201585 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/client.key
	I1205 07:14:50.912977  201585 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/apiserver.key.1b94d019
	I1205 07:14:50.913020  201585 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/proxy-client.key
	I1205 07:14:50.913140  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:14:50.913201  201585 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:14:50.913215  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:14:50.913248  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:14:50.913277  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:14:50.913306  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:14:50.913360  201585 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:14:50.913991  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:14:50.938091  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:14:50.960141  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:14:50.987498  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:14:51.012452  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 07:14:51.033482  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:14:51.052913  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:14:51.072674  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:14:51.093247  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:14:51.112713  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:14:51.133241  201585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:14:51.152470  201585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:14:51.167259  201585 ssh_runner.go:195] Run: openssl version
	I1205 07:14:51.177985  201585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:14:51.189259  201585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:14:51.199103  201585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:14:51.204480  201585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:14:51.204574  201585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:14:51.246561  201585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:14:51.254745  201585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:14:51.263028  201585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:14:51.271653  201585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:14:51.276288  201585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:14:51.276358  201585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:14:51.318669  201585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:14:51.326977  201585 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:14:51.335029  201585 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:14:51.343399  201585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:14:51.349951  201585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:14:51.350016  201585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:14:51.391521  201585 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
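Each ca-certificates block above follows OpenSSL's hashed-directory convention: the .pem is placed under /usr/share/ca-certificates, its subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it so TLS clients can locate the CA. Roughly:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    H=$(openssl x509 -hash -noout -in "$PEM")   # e.g. b5213941, as in the log
    sudo ln -fs "$PEM" "/etc/ssl/certs/$H.0"    # the .0 suffix disambiguates hash collisions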
	I1205 07:14:51.399607  201585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:14:51.405015  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:14:51.447003  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:14:51.488868  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:14:51.531008  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:14:51.575862  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:14:51.617973  201585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
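The six openssl runs above are 24-hour expiry checks: -checkend 86400 exits non-zero if the certificate expires within that many seconds, which is what triggers regeneration. A compact sweep over the same files:

    for C in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$C.crt" \
        || echo "$C expires within 24h"
    done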
	I1205 07:14:51.660539  201585 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-496233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-496233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:14:51.660641  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:14:51.660741  201585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:14:51.697890  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:14:51.697950  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:14:51.697968  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:14:51.697984  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:14:51.698010  201585 cri.go:89] found id: ""
	I1205 07:14:51.698093  201585 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1205 07:14:51.724297  201585 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:14:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1205 07:14:51.724421  201585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:14:51.733609  201585 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:14:51.733666  201585 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:14:51.733727  201585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:14:51.744748  201585 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:14:51.745393  201585 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-496233" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:14:51.745656  201585 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-496233" cluster setting kubeconfig missing "kubernetes-upgrade-496233" context setting]
	I1205 07:14:51.746114  201585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:14:51.746769  201585 kapi.go:59] client config for kubernetes-upgrade-496233: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kubernetes-upgrade-496233/client.key", CAFile:"/home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:14:51.747312  201585 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 07:14:51.747330  201585 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 07:14:51.747335  201585 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 07:14:51.747340  201585 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 07:14:51.747345  201585 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 07:14:51.747601  201585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:14:51.759323  201585 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 07:14:12.552681649 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 07:14:50.753071262 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-496233"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
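The diff above is the drift signal itself: the on-disk config was written for kubeadm.k8s.io/v1beta3 (and Kubernetes v1.28.0), while the new one targets v1beta4, which replaces the extraArgs maps with name/value lists. kubeadm can perform this schema migration on its own; a hedged sketch, since exact behavior depends on the kubeadm release:

    # rewrite an old-schema config into the newest supported API version
    kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.yaml.new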
	I1205 07:14:51.759385  201585 kubeadm.go:1161] stopping kube-system containers ...
	I1205 07:14:51.759410  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1205 07:14:51.759494  201585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:14:51.795999  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:14:51.796023  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:14:51.796028  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:14:51.796031  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:14:51.796035  201585 cri.go:89] found id: ""
	I1205 07:14:51.796056  201585 cri.go:252] Stopping containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:14:51.796113  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:14:51.800948  201585 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2
	I1205 07:14:51.843686  201585 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 07:14:51.860796  201585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:14:51.870528  201585 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  5 07:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  5 07:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  5 07:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  5 07:14 /etc/kubernetes/scheduler.conf
	
	I1205 07:14:51.870666  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:14:51.881150  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:14:51.891270  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:14:51.900905  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:14:51.901011  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:14:51.909840  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:14:51.919407  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:14:51.919522  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
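The grep/rm pairs above implement a simple staleness test: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the upcoming kubeadm init phase kubeconfig regenerates it. As a loop:

    EP=https://control-plane.minikube.internal:8443
    for F in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$F.conf" \
        || sudo rm -f "/etc/kubernetes/$F.conf"   # endpoint missing -> regenerate
    done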
	I1205 07:14:51.928165  201585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:14:51.938031  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:14:51.994862  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:14:53.435083  201585 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.440139945s)
	I1205 07:14:53.435195  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:14:53.674744  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:14:53.742313  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
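Rather than a full kubeadm init, the restart path replays individual phases against the new config, in order: certs, kubeconfig, kubelet-start, control-plane, etcd. The same sequence as a standalone script (PATH prefix as in the log):

    K=/var/lib/minikube/binaries/v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for PHASE in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K:$PATH" kubeadm init phase $PHASE --config "$CFG"
    done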
	I1205 07:14:53.795602  201585 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:14:53.795744  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
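What follows is a long poll: every 500 ms, pgrep is asked for a kube-apiserver process whose full command line mentions minikube (-x exact match against the full line, -n newest match, -f match argv rather than just the process name). Condensed, the wait amounts to:

    # wait up to 60s for the apiserver process to appear
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done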
	I1205 07:14:54.295853  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:54.796698  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:55.296820  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:55.795908  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:56.295897  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:56.796535  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:57.296888  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:57.796224  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:58.296548  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:58.796287  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:59.296588  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:14:59.795884  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:00.296155  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:00.796761  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:01.295895  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:01.796142  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:02.295870  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:02.796875  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:03.296858  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:03.796609  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:04.296582  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:04.796256  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:05.295863  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:05.796759  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:06.295845  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:06.796778  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:07.296385  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:07.796563  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:08.295882  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:08.796432  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:09.295862  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:09.796403  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:10.295925  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:10.795847  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:11.296556  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:11.796756  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:12.296091  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:12.796383  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:13.296449  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:13.796776  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:14.296508  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:14.795866  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:15.296567  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:15.795851  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:16.296580  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:16.795851  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:17.296801  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:17.796488  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:18.295949  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:18.795898  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:19.296893  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:19.796242  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:20.296482  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:20.795905  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:21.295860  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:21.796209  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:22.296690  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:22.796631  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:23.295809  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:23.795791  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:24.296339  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:24.796121  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:25.296436  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:25.796645  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:26.295874  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:26.796210  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:27.296362  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:27.795928  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:28.296786  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:28.795840  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:29.295858  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:29.795877  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:30.298520  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:30.796499  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:31.296257  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:31.796291  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:32.296513  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:32.795905  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:33.296032  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:33.796302  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:34.296585  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:34.796133  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:35.296579  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:35.796602  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:36.295997  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:36.796132  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:37.296540  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:37.795867  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:38.296737  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:38.795901  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:39.296567  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:39.795843  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:40.296647  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:40.796400  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:41.295867  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:41.796551  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:42.296471  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:42.795881  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:43.295937  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:43.796102  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:44.296562  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:44.796638  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:45.296836  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:45.795813  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:46.295790  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:46.796216  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:47.295843  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:47.795955  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:48.295853  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:48.796502  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:49.296484  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:49.795876  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:50.296755  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:50.795856  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:51.295872  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:51.796472  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:52.296724  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:52.795816  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:53.295944  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:53.795814  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:15:53.795908  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:15:53.832486  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:15:53.832506  201585 cri.go:89] found id: ""
	I1205 07:15:53.832514  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:15:53.832573  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:53.837152  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:15:53.837248  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:15:53.876300  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:15:53.876318  201585 cri.go:89] found id: ""
	I1205 07:15:53.876327  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:15:53.876382  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:53.881702  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:15:53.881782  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:15:53.914425  201585 cri.go:89] found id: ""
	I1205 07:15:53.914448  201585 logs.go:282] 0 containers: []
	W1205 07:15:53.914460  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:15:53.914477  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:15:53.914552  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:15:53.953530  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:15:53.953553  201585 cri.go:89] found id: ""
	I1205 07:15:53.953561  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:15:53.953616  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:53.961910  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:15:53.962034  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:15:54.001663  201585 cri.go:89] found id: ""
	I1205 07:15:54.001742  201585 logs.go:282] 0 containers: []
	W1205 07:15:54.001766  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:15:54.001785  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:15:54.001904  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:15:54.054584  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:15:54.054644  201585 cri.go:89] found id: ""
	I1205 07:15:54.054676  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:15:54.054761  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:54.059977  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:15:54.060105  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:15:54.095530  201585 cri.go:89] found id: ""
	I1205 07:15:54.095610  201585 logs.go:282] 0 containers: []
	W1205 07:15:54.095633  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:15:54.095651  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:15:54.095760  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:15:54.125375  201585 cri.go:89] found id: ""
	I1205 07:15:54.125449  201585 logs.go:282] 0 containers: []
	W1205 07:15:54.125470  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:15:54.125519  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:15:54.125546  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:15:54.143214  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:15:54.143285  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:15:54.263123  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:15:54.263191  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:15:54.263233  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:15:54.318978  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:15:54.319053  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:15:54.362150  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:15:54.362232  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:15:54.406555  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:15:54.406591  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:15:54.444290  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:15:54.444322  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:15:54.482347  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:15:54.482385  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:15:54.513007  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:15:54.513035  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
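
The cycle ending above repeats, with roughly three seconds between iterations, for the remainder of this section: minikube probes for a kube-apiserver process, enumerates the expected control-plane containers over CRI, and re-gathers component logs on each miss. As a rough illustration only (this is not minikube's actual source; the component list, commands, and 3-second interval are inferred from the "found id" lines and timestamps above), the loop has approximately this shape in Go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Component names probed in each cycle of the log. The `found id: ""` lines
// above show which ones have no container yet (coredns, kube-proxy, kindnet,
// storage-provisioner), while apiserver/etcd/scheduler/controller-manager do.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for {
		// Process probe seen at the top of each cycle (ssh_runner.go:195).
		// The real wait loop also requires the API itself to answer, which
		// is why these cycles keep repeating even though the apiserver
		// container exists.
		_ = exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()

		for _, name := range components {
			// sudo crictl ps -a --quiet --name=<component>, as run over SSH.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			fmt.Printf("%s: %q\n", name, out)
		}

		time.Sleep(3 * time.Second) // interval inferred from the log timestamps
	}
}
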
	I1205 07:15:57.075254  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:15:57.089299  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:15:57.089364  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:15:57.182040  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:15:57.182064  201585 cri.go:89] found id: ""
	I1205 07:15:57.182072  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:15:57.182139  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:57.198453  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:15:57.198540  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:15:57.261448  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:15:57.261473  201585 cri.go:89] found id: ""
	I1205 07:15:57.261481  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:15:57.261538  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:57.265972  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:15:57.266077  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:15:57.307369  201585 cri.go:89] found id: ""
	I1205 07:15:57.307395  201585 logs.go:282] 0 containers: []
	W1205 07:15:57.307411  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:15:57.307433  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:15:57.307520  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:15:57.361649  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:15:57.361672  201585 cri.go:89] found id: ""
	I1205 07:15:57.361687  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:15:57.361784  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:57.366923  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:15:57.367029  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:15:57.419841  201585 cri.go:89] found id: ""
	I1205 07:15:57.419867  201585 logs.go:282] 0 containers: []
	W1205 07:15:57.419884  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:15:57.419916  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:15:57.420001  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:15:57.465776  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:15:57.465799  201585 cri.go:89] found id: ""
	I1205 07:15:57.465808  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:15:57.465869  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:15:57.470108  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:15:57.470188  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:15:57.527235  201585 cri.go:89] found id: ""
	I1205 07:15:57.527270  201585 logs.go:282] 0 containers: []
	W1205 07:15:57.527279  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:15:57.527288  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:15:57.527358  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:15:57.566235  201585 cri.go:89] found id: ""
	I1205 07:15:57.566262  201585 logs.go:282] 0 containers: []
	W1205 07:15:57.566286  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:15:57.566302  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:15:57.566318  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:15:57.628684  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:15:57.628712  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:15:57.646331  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:15:57.646364  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:15:57.761499  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:15:57.761527  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:15:57.761543  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:15:57.818878  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:15:57.818911  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:15:57.911882  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:15:57.911917  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:15:57.963520  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:15:57.963555  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:15:58.009982  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:15:58.010021  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:15:58.124512  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:15:58.124588  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:00.698789  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:00.709948  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:00.710019  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:00.743036  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:00.743056  201585 cri.go:89] found id: ""
	I1205 07:16:00.743064  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:00.743119  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:00.748609  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:00.748677  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:00.779412  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:00.779430  201585 cri.go:89] found id: ""
	I1205 07:16:00.779439  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:00.779495  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:00.784501  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:00.784574  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:00.812578  201585 cri.go:89] found id: ""
	I1205 07:16:00.812600  201585 logs.go:282] 0 containers: []
	W1205 07:16:00.812608  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:00.812615  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:00.812676  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:00.846688  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:00.846706  201585 cri.go:89] found id: ""
	I1205 07:16:00.846714  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:00.846773  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:00.851662  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:00.851781  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:00.881349  201585 cri.go:89] found id: ""
	I1205 07:16:00.881414  201585 logs.go:282] 0 containers: []
	W1205 07:16:00.881436  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:00.881452  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:00.881552  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:00.915075  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:00.915146  201585 cri.go:89] found id: ""
	I1205 07:16:00.915166  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:00.915251  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:00.920121  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:00.920233  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:00.972027  201585 cri.go:89] found id: ""
	I1205 07:16:00.972102  201585 logs.go:282] 0 containers: []
	W1205 07:16:00.972127  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:00.972146  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:00.972239  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:01.031467  201585 cri.go:89] found id: ""
	I1205 07:16:01.031536  201585 logs.go:282] 0 containers: []
	W1205 07:16:01.031560  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:01.031585  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:01.031623  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:01.215138  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:01.215211  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:01.215240  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:01.252988  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:01.253017  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:01.292171  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:01.292195  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:01.362878  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:01.362958  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:01.414670  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:01.414753  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:01.457341  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:01.457459  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:01.498076  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:01.498153  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:01.533680  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:01.533711  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:04.060779  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:04.075035  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:04.075117  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:04.125077  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:04.125099  201585 cri.go:89] found id: ""
	I1205 07:16:04.125108  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:04.125176  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:04.134768  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:04.134848  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:04.202297  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:04.202315  201585 cri.go:89] found id: ""
	I1205 07:16:04.202323  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:04.202376  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:04.206964  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:04.207034  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:04.276495  201585 cri.go:89] found id: ""
	I1205 07:16:04.276517  201585 logs.go:282] 0 containers: []
	W1205 07:16:04.276526  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:04.276532  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:04.276597  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:04.312262  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:04.312285  201585 cri.go:89] found id: ""
	I1205 07:16:04.312293  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:04.312357  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:04.322046  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:04.322149  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:04.381284  201585 cri.go:89] found id: ""
	I1205 07:16:04.381308  201585 logs.go:282] 0 containers: []
	W1205 07:16:04.381316  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:04.381323  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:04.381382  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:04.465855  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:04.465885  201585 cri.go:89] found id: ""
	I1205 07:16:04.465894  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:04.465948  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:04.470887  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:04.470965  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:04.507636  201585 cri.go:89] found id: ""
	I1205 07:16:04.507663  201585 logs.go:282] 0 containers: []
	W1205 07:16:04.507672  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:04.507678  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:04.507736  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:04.538666  201585 cri.go:89] found id: ""
	I1205 07:16:04.538691  201585 logs.go:282] 0 containers: []
	W1205 07:16:04.538700  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:04.538714  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:04.538726  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:04.599501  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:04.599537  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:04.656908  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:04.656936  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:04.737038  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:04.737067  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:04.811111  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:04.811148  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:04.879619  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:04.879700  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:04.925883  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:04.925960  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:05.016078  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:05.016162  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:05.042122  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:05.042200  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:05.215631  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
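
Every "describe nodes" attempt in this section fails the same way: the kubeconfig points kubectl at localhost:8443, nothing on the node is accepting connections on that port, so the command exits with status 1 and "connection refused". A minimal, hypothetical Go check (not part of the test suite) reproduces what kubectl observes here:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl dials above via /var/lib/minikube/kubeconfig.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// The state this log keeps reporting: the port is closed, so every
		// "describe nodes" run fails before reaching the API.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
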
	I1205 07:16:07.717323  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:07.735128  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:07.735196  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:07.790192  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:07.790209  201585 cri.go:89] found id: ""
	I1205 07:16:07.790217  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:07.790268  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:07.797008  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:07.797075  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:07.844287  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:07.844307  201585 cri.go:89] found id: ""
	I1205 07:16:07.844315  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:07.844370  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:07.850496  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:07.850568  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:07.897331  201585 cri.go:89] found id: ""
	I1205 07:16:07.897352  201585 logs.go:282] 0 containers: []
	W1205 07:16:07.897360  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:07.897372  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:07.897428  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:07.931729  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:07.931752  201585 cri.go:89] found id: ""
	I1205 07:16:07.931761  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:07.931815  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:07.936827  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:07.936894  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:07.968208  201585 cri.go:89] found id: ""
	I1205 07:16:07.968234  201585 logs.go:282] 0 containers: []
	W1205 07:16:07.968244  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:07.968255  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:07.968318  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:08.004513  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:08.004538  201585 cri.go:89] found id: ""
	I1205 07:16:08.004547  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:08.004618  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:08.012525  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:08.012608  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:08.083908  201585 cri.go:89] found id: ""
	I1205 07:16:08.083930  201585 logs.go:282] 0 containers: []
	W1205 07:16:08.083939  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:08.083945  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:08.084007  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:08.153758  201585 cri.go:89] found id: ""
	I1205 07:16:08.153782  201585 logs.go:282] 0 containers: []
	W1205 07:16:08.153791  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:08.153804  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:08.153816  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:08.250540  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:08.250574  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:08.287681  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:08.287715  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:08.380576  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:08.380601  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:08.380614  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:08.442336  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:08.442372  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:08.499690  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:08.499798  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:08.562196  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:08.562275  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:08.653550  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:08.653631  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:08.670271  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:08.670298  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:11.213312  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:11.224120  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:11.224193  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:11.257746  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:11.257771  201585 cri.go:89] found id: ""
	I1205 07:16:11.257779  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:11.257838  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:11.262316  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:11.262415  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:11.295614  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:11.295637  201585 cri.go:89] found id: ""
	I1205 07:16:11.295644  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:11.295698  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:11.305718  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:11.305794  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:11.345631  201585 cri.go:89] found id: ""
	I1205 07:16:11.345659  201585 logs.go:282] 0 containers: []
	W1205 07:16:11.345668  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:11.345674  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:11.345730  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:11.373017  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:11.373041  201585 cri.go:89] found id: ""
	I1205 07:16:11.373059  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:11.373112  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:11.377591  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:11.377707  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:11.425240  201585 cri.go:89] found id: ""
	I1205 07:16:11.425262  201585 logs.go:282] 0 containers: []
	W1205 07:16:11.425270  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:11.425278  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:11.425333  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:11.476465  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:11.476485  201585 cri.go:89] found id: ""
	I1205 07:16:11.476495  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:11.476551  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:11.482444  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:11.482515  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:11.515363  201585 cri.go:89] found id: ""
	I1205 07:16:11.515387  201585 logs.go:282] 0 containers: []
	W1205 07:16:11.515396  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:11.515402  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:11.515458  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:11.541262  201585 cri.go:89] found id: ""
	I1205 07:16:11.541284  201585 logs.go:282] 0 containers: []
	W1205 07:16:11.541292  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:11.541307  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:11.541318  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:11.580134  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:11.580210  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:11.650174  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:11.650207  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:11.666255  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:11.666281  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:11.758031  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:11.758051  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:11.758067  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:11.795475  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:11.795534  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:11.854618  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:11.854645  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:11.902364  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:11.902434  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:11.948845  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:11.948924  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:14.494372  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:14.507697  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:14.507788  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:14.545831  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:14.545850  201585 cri.go:89] found id: ""
	I1205 07:16:14.545858  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:14.545913  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:14.560705  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:14.560825  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:14.607778  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:14.607803  201585 cri.go:89] found id: ""
	I1205 07:16:14.607811  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:14.607873  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:14.614826  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:14.614899  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:14.649198  201585 cri.go:89] found id: ""
	I1205 07:16:14.649225  201585 logs.go:282] 0 containers: []
	W1205 07:16:14.649234  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:14.649240  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:14.649300  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:14.688448  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:14.688483  201585 cri.go:89] found id: ""
	I1205 07:16:14.688493  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:14.688558  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:14.694593  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:14.694663  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:14.752033  201585 cri.go:89] found id: ""
	I1205 07:16:14.752061  201585 logs.go:282] 0 containers: []
	W1205 07:16:14.752071  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:14.752077  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:14.752148  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:14.784561  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:14.784591  201585 cri.go:89] found id: ""
	I1205 07:16:14.784601  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:14.784653  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:14.789654  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:14.789734  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:14.831957  201585 cri.go:89] found id: ""
	I1205 07:16:14.831993  201585 logs.go:282] 0 containers: []
	W1205 07:16:14.832002  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:14.832009  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:14.832075  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:14.898898  201585 cri.go:89] found id: ""
	I1205 07:16:14.898935  201585 logs.go:282] 0 containers: []
	W1205 07:16:14.898944  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:14.898957  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:14.898969  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:14.959152  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:14.959187  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:15.014751  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:15.014799  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:15.094674  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:15.094705  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:15.130130  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:15.130166  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:15.221555  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:15.221670  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:15.221696  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:15.260025  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:15.260190  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:15.296957  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:15.296993  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:15.361131  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:15.361173  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:17.877312  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:17.888978  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:17.889047  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:17.915736  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:17.915756  201585 cri.go:89] found id: ""
	I1205 07:16:17.915764  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:17.915820  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:17.920178  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:17.920266  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:17.945423  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:17.945441  201585 cri.go:89] found id: ""
	I1205 07:16:17.945450  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:17.945501  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:17.949790  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:17.949860  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:17.975359  201585 cri.go:89] found id: ""
	I1205 07:16:17.975380  201585 logs.go:282] 0 containers: []
	W1205 07:16:17.975389  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:17.975401  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:17.975457  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:18.000683  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:18.000705  201585 cri.go:89] found id: ""
	I1205 07:16:18.000714  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:18.000774  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:18.006821  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:18.006908  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:18.036282  201585 cri.go:89] found id: ""
	I1205 07:16:18.036304  201585 logs.go:282] 0 containers: []
	W1205 07:16:18.036313  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:18.036320  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:18.036379  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:18.067648  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:18.067668  201585 cri.go:89] found id: ""
	I1205 07:16:18.067676  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:18.067742  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:18.072424  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:18.072493  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:18.101283  201585 cri.go:89] found id: ""
	I1205 07:16:18.101343  201585 logs.go:282] 0 containers: []
	W1205 07:16:18.101364  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:18.101386  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:18.101459  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:18.140497  201585 cri.go:89] found id: ""
	I1205 07:16:18.140569  201585 logs.go:282] 0 containers: []
	W1205 07:16:18.140606  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:18.140649  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:18.140684  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:18.242994  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:18.243074  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:18.264546  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:18.264578  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:18.413820  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:18.413843  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:18.413856  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:18.466357  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:18.466394  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:18.554224  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:18.554265  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:18.605743  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:18.605778  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:18.655395  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:18.655433  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:18.699089  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:18.699118  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:21.242499  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:21.257177  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:21.257245  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:21.303138  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:21.303161  201585 cri.go:89] found id: ""
	I1205 07:16:21.303169  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:21.303220  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:21.307918  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:21.307985  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:21.335048  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:21.335071  201585 cri.go:89] found id: ""
	I1205 07:16:21.335079  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:21.335134  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:21.339636  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:21.339715  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:21.366349  201585 cri.go:89] found id: ""
	I1205 07:16:21.366374  201585 logs.go:282] 0 containers: []
	W1205 07:16:21.366382  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:21.366388  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:21.366443  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:21.402103  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:21.402126  201585 cri.go:89] found id: ""
	I1205 07:16:21.402135  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:21.402186  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:21.406642  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:21.406710  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:21.432898  201585 cri.go:89] found id: ""
	I1205 07:16:21.432922  201585 logs.go:282] 0 containers: []
	W1205 07:16:21.432932  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:21.432938  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:21.432996  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:21.459242  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:21.459265  201585 cri.go:89] found id: ""
	I1205 07:16:21.459273  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:21.459328  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:21.464156  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:21.464229  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:21.491250  201585 cri.go:89] found id: ""
	I1205 07:16:21.491273  201585 logs.go:282] 0 containers: []
	W1205 07:16:21.491283  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:21.491289  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:21.491347  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:21.527599  201585 cri.go:89] found id: ""
	I1205 07:16:21.527624  201585 logs.go:282] 0 containers: []
	W1205 07:16:21.527633  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:21.527648  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:21.527659  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:21.575229  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:21.575256  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:21.613143  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:21.613230  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:21.656831  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:21.656860  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:21.725777  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:21.725809  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:21.742715  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:21.742740  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:21.866270  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:21.866288  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:21.866300  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:21.972156  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:21.972232  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:22.035493  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:22.035529  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
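
At this point one full diagnostic pass is complete, and the same pass repeats below every few seconds while minikube waits for the apiserver to come up. A minimal bash sketch of that wait loop follows; the three-second interval matches the spacing of the timestamps above, but the deadline is an illustrative assumption, not minikube's actual value:

    # Poll for a running kube-apiserver process, as the log does above.
    # The 300s deadline is an assumed value for illustration only.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never came up" >&2
        exit 1
      fi
      sleep 3
    done
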
	I1205 07:16:24.574481  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:24.586129  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:24.586203  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:24.616444  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:24.616465  201585 cri.go:89] found id: ""
	I1205 07:16:24.616473  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:24.616528  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:24.620940  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:24.621010  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:24.647397  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:24.647418  201585 cri.go:89] found id: ""
	I1205 07:16:24.647428  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:24.647502  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:24.652190  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:24.652263  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:24.678932  201585 cri.go:89] found id: ""
	I1205 07:16:24.678954  201585 logs.go:282] 0 containers: []
	W1205 07:16:24.678963  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:24.678968  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:24.679033  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:24.704918  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:24.704941  201585 cri.go:89] found id: ""
	I1205 07:16:24.704949  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:24.705002  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:24.709453  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:24.709522  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:24.733433  201585 cri.go:89] found id: ""
	I1205 07:16:24.733460  201585 logs.go:282] 0 containers: []
	W1205 07:16:24.733468  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:24.733474  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:24.733552  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:24.759711  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:24.759739  201585 cri.go:89] found id: ""
	I1205 07:16:24.759748  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:24.759805  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:24.764144  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:24.764212  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:24.792798  201585 cri.go:89] found id: ""
	I1205 07:16:24.792861  201585 logs.go:282] 0 containers: []
	W1205 07:16:24.792883  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:24.792900  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:24.792989  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:24.822219  201585 cri.go:89] found id: ""
	I1205 07:16:24.822284  201585 logs.go:282] 0 containers: []
	W1205 07:16:24.822309  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:24.822329  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:24.822356  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:24.862115  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:24.862191  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:24.898965  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:24.899035  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:24.966505  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:24.966524  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:24.966536  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:25.001825  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:25.001902  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:25.043910  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:25.043942  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:25.080096  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:25.080127  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:25.113074  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:25.113108  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:25.173684  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:25.173721  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
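
Every describe-nodes attempt in this section fails the same way: connection refused on localhost:8443, meaning nothing is accepting connections on the apiserver port even though a kube-apiserver container exists. Two generic checks that could confirm this from inside the node (not part of the harness output; the /healthz probe assumes the default anonymous access to apiserver health endpoints is still enabled):

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'
    # If something is listening, probe the apiserver health endpoint.
    curl -ksf https://localhost:8443/healthz || echo 'healthz probe failed'
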
	I1205 07:16:27.688773  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:27.701872  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:27.701950  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:27.726715  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:27.726737  201585 cri.go:89] found id: ""
	I1205 07:16:27.726745  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:27.726819  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:27.731159  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:27.731233  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:27.759475  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:27.759537  201585 cri.go:89] found id: ""
	I1205 07:16:27.759558  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:27.759643  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:27.764014  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:27.764078  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:27.788964  201585 cri.go:89] found id: ""
	I1205 07:16:27.788985  201585 logs.go:282] 0 containers: []
	W1205 07:16:27.788993  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:27.788999  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:27.789051  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:27.815825  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:27.815845  201585 cri.go:89] found id: ""
	I1205 07:16:27.815854  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:27.815909  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:27.820548  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:27.820668  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:27.855713  201585 cri.go:89] found id: ""
	I1205 07:16:27.855737  201585 logs.go:282] 0 containers: []
	W1205 07:16:27.855745  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:27.855751  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:27.855808  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:27.888280  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:27.888300  201585 cri.go:89] found id: ""
	I1205 07:16:27.888307  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:27.888363  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:27.892918  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:27.893029  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:27.922324  201585 cri.go:89] found id: ""
	I1205 07:16:27.922401  201585 logs.go:282] 0 containers: []
	W1205 07:16:27.922427  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:27.922445  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:27.922550  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:27.948859  201585 cri.go:89] found id: ""
	I1205 07:16:27.948932  201585 logs.go:282] 0 containers: []
	W1205 07:16:27.948953  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:27.948979  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:27.949022  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:27.984726  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:27.984761  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:28.019834  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:28.019864  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:28.053655  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:28.053684  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:28.090096  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:28.090133  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:28.134891  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:28.134923  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:28.167711  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:28.167744  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:28.229510  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:28.229550  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:28.243396  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:28.243431  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:28.306821  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
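
The per-component gathering above always follows the same two-step crictl pattern: list matching container IDs in quiet mode, then tail each container's log. A compact, runnable sketch of that pattern, with etcd chosen as the example component:

    # --quiet prints only container IDs; empty output means no container.
    ids=$(sudo crictl ps -a --quiet --name=etcd)
    for id in $ids; do
      sudo crictl logs --tail 400 "$id"
    done
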
	I1205 07:16:30.807952  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:30.818741  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:30.818836  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:30.846293  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:30.846316  201585 cri.go:89] found id: ""
	I1205 07:16:30.846325  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:30.846400  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:30.851436  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:30.851528  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:30.893208  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:30.893277  201585 cri.go:89] found id: ""
	I1205 07:16:30.893299  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:30.893382  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:30.898451  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:30.898571  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:30.931164  201585 cri.go:89] found id: ""
	I1205 07:16:30.931189  201585 logs.go:282] 0 containers: []
	W1205 07:16:30.931198  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:30.931204  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:30.931282  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:30.956709  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:30.956792  201585 cri.go:89] found id: ""
	I1205 07:16:30.956814  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:30.956900  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:30.961275  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:30.961360  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:30.987165  201585 cri.go:89] found id: ""
	I1205 07:16:30.987236  201585 logs.go:282] 0 containers: []
	W1205 07:16:30.987249  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:30.987256  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:30.987333  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:31.015138  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:31.015209  201585 cri.go:89] found id: ""
	I1205 07:16:31.015231  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:31.015301  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:31.020191  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:31.020264  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:31.047795  201585 cri.go:89] found id: ""
	I1205 07:16:31.047872  201585 logs.go:282] 0 containers: []
	W1205 07:16:31.047896  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:31.047914  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:31.048005  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:31.074276  201585 cri.go:89] found id: ""
	I1205 07:16:31.074301  201585 logs.go:282] 0 containers: []
	W1205 07:16:31.074311  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:31.074326  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:31.074372  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:31.132851  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:31.132885  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:31.204216  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:31.204280  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:31.204301  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:31.238489  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:31.238519  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:31.271210  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:31.271240  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:31.305378  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:31.305409  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:31.318600  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:31.318629  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:31.364337  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:31.364367  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:31.400981  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:31.401009  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
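
The container-status step relies on a small shell fallback chain: use crictl from PATH when 'which' finds it, otherwise still try the bare name, and fall back to docker if crictl fails altogether. The same idiom as run above, split out for readability:

    # `which crictl` prints the full path when installed; `echo crictl`
    # keeps the command line valid so the PATH lookup still gets a chance.
    crictl_bin=$(which crictl || echo crictl)
    sudo "$crictl_bin" ps -a || sudo docker ps -a   # docker is the last resort
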
	I1205 07:16:33.933385  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:33.944421  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:33.944497  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:33.970193  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:33.970226  201585 cri.go:89] found id: ""
	I1205 07:16:33.970235  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:33.970302  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:33.974873  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:33.974947  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:34.002801  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:34.002823  201585 cri.go:89] found id: ""
	I1205 07:16:34.002832  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:34.002907  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:34.011185  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:34.011265  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:34.038952  201585 cri.go:89] found id: ""
	I1205 07:16:34.038978  201585 logs.go:282] 0 containers: []
	W1205 07:16:34.038987  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:34.039021  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:34.039106  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:34.072427  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:34.072450  201585 cri.go:89] found id: ""
	I1205 07:16:34.072459  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:34.072516  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:34.077245  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:34.077328  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:34.103932  201585 cri.go:89] found id: ""
	I1205 07:16:34.104006  201585 logs.go:282] 0 containers: []
	W1205 07:16:34.104029  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:34.104047  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:34.104134  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:34.131883  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:34.131903  201585 cri.go:89] found id: ""
	I1205 07:16:34.131911  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:34.131965  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:34.136532  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:34.136598  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:34.161383  201585 cri.go:89] found id: ""
	I1205 07:16:34.161410  201585 logs.go:282] 0 containers: []
	W1205 07:16:34.161419  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:34.161425  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:34.161485  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:34.187929  201585 cri.go:89] found id: ""
	I1205 07:16:34.187955  201585 logs.go:282] 0 containers: []
	W1205 07:16:34.187964  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:34.187977  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:34.187989  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:34.246385  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:34.246419  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:34.283515  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:34.283547  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:34.320726  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:34.320760  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:34.352435  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:34.352463  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:34.386967  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:34.386999  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:34.400721  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:34.400747  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:34.477285  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:34.477304  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:34.477317  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:34.510457  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:34.510489  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
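
The kubectl calls above are pinned to /var/lib/minikube/kubeconfig, which is what points them at localhost:8443. One way to confirm the endpoint that kubeconfig targets; the file path is taken from the log, while the grep relies on the usual kubeconfig YAML layout:

    sudo grep -m1 'server:' /var/lib/minikube/kubeconfig
    # expected shape: server: https://localhost:8443
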
	I1205 07:16:37.041304  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:37.052307  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:37.052380  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:37.082979  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:37.083003  201585 cri.go:89] found id: ""
	I1205 07:16:37.083013  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:37.083080  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:37.087920  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:37.087996  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:37.115053  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:37.115075  201585 cri.go:89] found id: ""
	I1205 07:16:37.115084  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:37.115140  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:37.119675  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:37.119746  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:37.145464  201585 cri.go:89] found id: ""
	I1205 07:16:37.145487  201585 logs.go:282] 0 containers: []
	W1205 07:16:37.145495  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:37.145502  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:37.145559  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:37.171605  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:37.171627  201585 cri.go:89] found id: ""
	I1205 07:16:37.171636  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:37.171690  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:37.176033  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:37.176109  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:37.202505  201585 cri.go:89] found id: ""
	I1205 07:16:37.202530  201585 logs.go:282] 0 containers: []
	W1205 07:16:37.202539  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:37.202545  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:37.202606  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:37.231721  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:37.231742  201585 cri.go:89] found id: ""
	I1205 07:16:37.231751  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:37.231805  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:37.236307  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:37.236372  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:37.262400  201585 cri.go:89] found id: ""
	I1205 07:16:37.262424  201585 logs.go:282] 0 containers: []
	W1205 07:16:37.262432  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:37.262438  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:37.262496  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:37.287971  201585 cri.go:89] found id: ""
	I1205 07:16:37.287994  201585 logs.go:282] 0 containers: []
	W1205 07:16:37.288003  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:37.288016  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:37.288027  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:37.354224  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:37.354246  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:37.354258  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:37.389394  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:37.389425  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:37.426052  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:37.426085  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:37.460283  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:37.460316  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:37.522110  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:37.522143  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:37.535477  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:37.535512  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:37.572161  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:37.572193  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:37.608011  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:37.608041  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
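
Host-level logs are collected the same way on every pass: the newest 400 journal entries per systemd unit, plus kernel messages filtered to warning level and above. The same collection, runnable standalone; the dmesg flags are copied from the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # -P no pager, -H human-readable timestamps, -L=never disables color.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
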
	I1205 07:16:40.171003  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:40.182176  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:40.182254  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:40.211525  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:40.211548  201585 cri.go:89] found id: ""
	I1205 07:16:40.211556  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:40.211609  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:40.215971  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:40.216043  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:40.240813  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:40.240883  201585 cri.go:89] found id: ""
	I1205 07:16:40.240904  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:40.240984  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:40.245404  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:40.245474  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:40.270555  201585 cri.go:89] found id: ""
	I1205 07:16:40.270577  201585 logs.go:282] 0 containers: []
	W1205 07:16:40.270586  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:40.270592  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:40.270654  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:40.303423  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:40.303495  201585 cri.go:89] found id: ""
	I1205 07:16:40.303516  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:40.303600  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:40.307947  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:40.308017  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:40.333492  201585 cri.go:89] found id: ""
	I1205 07:16:40.333513  201585 logs.go:282] 0 containers: []
	W1205 07:16:40.333521  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:40.333527  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:40.333586  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:40.358998  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:40.359074  201585 cri.go:89] found id: ""
	I1205 07:16:40.359089  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:40.359150  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:40.363544  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:40.363659  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:40.389556  201585 cri.go:89] found id: ""
	I1205 07:16:40.389578  201585 logs.go:282] 0 containers: []
	W1205 07:16:40.389586  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:40.389592  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:40.389650  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:40.414978  201585 cri.go:89] found id: ""
	I1205 07:16:40.415000  201585 logs.go:282] 0 containers: []
	W1205 07:16:40.415009  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:40.415024  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:40.415036  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:40.448855  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:40.448884  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:40.487857  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:40.487888  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:40.523079  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:40.523109  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:40.570635  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:40.570661  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:40.637245  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:40.637284  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:40.650744  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:40.650771  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:40.723920  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:40.723939  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:40.723959  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:40.759547  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:40.759577  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:43.295098  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:43.308063  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:43.308135  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:43.336227  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:43.336299  201585 cri.go:89] found id: ""
	I1205 07:16:43.336321  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:43.336403  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:43.341028  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:43.341110  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:43.366629  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:43.366653  201585 cri.go:89] found id: ""
	I1205 07:16:43.366661  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:43.366739  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:43.371365  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:43.371433  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:43.399740  201585 cri.go:89] found id: ""
	I1205 07:16:43.399769  201585 logs.go:282] 0 containers: []
	W1205 07:16:43.399779  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:43.399786  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:43.399841  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:43.427466  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:43.427486  201585 cri.go:89] found id: ""
	I1205 07:16:43.427494  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:43.427550  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:43.432378  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:43.432454  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:43.463449  201585 cri.go:89] found id: ""
	I1205 07:16:43.463473  201585 logs.go:282] 0 containers: []
	W1205 07:16:43.463482  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:43.463489  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:43.463550  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:43.490551  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:43.490573  201585 cri.go:89] found id: ""
	I1205 07:16:43.490581  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:43.490638  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:43.495243  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:43.495319  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:43.522427  201585 cri.go:89] found id: ""
	I1205 07:16:43.522451  201585 logs.go:282] 0 containers: []
	W1205 07:16:43.522460  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:43.522466  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:43.522525  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:43.548679  201585 cri.go:89] found id: ""
	I1205 07:16:43.548703  201585 logs.go:282] 0 containers: []
	W1205 07:16:43.548712  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:43.548731  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:43.548744  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:43.626515  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:16:43.626539  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:43.626552  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:43.672554  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:43.672589  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:43.706224  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:43.706329  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:43.740945  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:43.741021  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:43.801767  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:43.801801  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:43.816128  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:43.816157  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:43.864167  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:43.864198  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:43.900939  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:43.900968  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:46.437181  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:46.448525  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:46.448603  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:46.485733  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:46.485755  201585 cri.go:89] found id: ""
	I1205 07:16:46.485763  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:46.485818  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:46.490485  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:46.490555  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:46.516398  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:46.516418  201585 cri.go:89] found id: ""
	I1205 07:16:46.516426  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:46.516479  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:46.521436  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:46.521561  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:46.551227  201585 cri.go:89] found id: ""
	I1205 07:16:46.551300  201585 logs.go:282] 0 containers: []
	W1205 07:16:46.551324  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:46.551343  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:46.551432  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:46.578630  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:46.578651  201585 cri.go:89] found id: ""
	I1205 07:16:46.578659  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:46.578719  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:46.583217  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:46.583288  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:46.621783  201585 cri.go:89] found id: ""
	I1205 07:16:46.621808  201585 logs.go:282] 0 containers: []
	W1205 07:16:46.621824  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:46.621830  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:46.621889  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:46.651226  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:46.651248  201585 cri.go:89] found id: ""
	I1205 07:16:46.651257  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:46.651314  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:46.656259  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:46.656338  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:46.684659  201585 cri.go:89] found id: ""
	I1205 07:16:46.684682  201585 logs.go:282] 0 containers: []
	W1205 07:16:46.684691  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:46.684698  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:46.684755  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:46.711994  201585 cri.go:89] found id: ""
	I1205 07:16:46.712018  201585 logs.go:282] 0 containers: []
	W1205 07:16:46.712026  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:46.712040  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:46.712051  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:46.770905  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:46.770936  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:46.834854  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:46.834876  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:46.834890  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:46.881475  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:46.881506  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:46.914476  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:46.914508  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:46.951854  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:46.951885  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:46.981713  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:46.981742  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:47.014998  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:47.015033  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:47.048316  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:47.048397  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
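[Editor's note: the recurring "failed describe nodes" warning is the actual symptom here: the apiserver container exists in the crictl listings, yet nothing answers on localhost:8443 inside the node, so every kubectl call is refused. A minimal probe to watch for the port coming up, as a sketch (it assumes anonymous access to /healthz, which upstream Kubernetes grants by default through the system:public-info-viewer role):

    # Retry until the apiserver starts answering on :8443 (run inside the node).
    until curl -ksf https://localhost:8443/healthz >/dev/null; do
      echo "apiserver not ready on :8443, retrying..."
      sleep 3
    done
]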
	I1205 07:16:49.562945  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:49.573762  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:49.573835  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:49.607574  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:49.607597  201585 cri.go:89] found id: ""
	I1205 07:16:49.607611  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:49.607678  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:49.616717  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:49.616799  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:49.646687  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:49.646718  201585 cri.go:89] found id: ""
	I1205 07:16:49.646727  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:49.646781  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:49.651536  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:49.651604  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:49.679662  201585 cri.go:89] found id: ""
	I1205 07:16:49.679690  201585 logs.go:282] 0 containers: []
	W1205 07:16:49.679700  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:49.679706  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:49.679765  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:49.709884  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:49.709907  201585 cri.go:89] found id: ""
	I1205 07:16:49.709915  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:49.709968  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:49.714390  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:49.714460  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:49.740542  201585 cri.go:89] found id: ""
	I1205 07:16:49.740563  201585 logs.go:282] 0 containers: []
	W1205 07:16:49.740572  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:49.740579  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:49.740634  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:49.766293  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:49.766316  201585 cri.go:89] found id: ""
	I1205 07:16:49.766324  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:49.766378  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:49.770798  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:49.770888  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:49.796733  201585 cri.go:89] found id: ""
	I1205 07:16:49.796755  201585 logs.go:282] 0 containers: []
	W1205 07:16:49.796764  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:49.796770  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:49.796831  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:49.820885  201585 cri.go:89] found id: ""
	I1205 07:16:49.820911  201585 logs.go:282] 0 containers: []
	W1205 07:16:49.820920  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:49.820936  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:49.820948  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:49.854588  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:49.854617  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:49.902336  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:49.902366  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:49.957332  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:49.957366  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:50.010917  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:50.010962  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:50.057402  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:50.057434  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:50.126871  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:50.126973  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:50.141085  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:50.141108  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:50.224544  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:50.224562  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:50.224574  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:52.757973  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:52.768754  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:52.768828  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:52.795722  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:52.795743  201585 cri.go:89] found id: ""
	I1205 07:16:52.795751  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:52.795805  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:52.800153  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:52.800252  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:52.825871  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:52.825905  201585 cri.go:89] found id: ""
	I1205 07:16:52.825913  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:52.825972  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:52.830514  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:52.830584  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:52.857446  201585 cri.go:89] found id: ""
	I1205 07:16:52.857468  201585 logs.go:282] 0 containers: []
	W1205 07:16:52.857477  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:52.857483  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:52.857542  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:52.885047  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:52.885066  201585 cri.go:89] found id: ""
	I1205 07:16:52.885074  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:52.885129  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:52.889688  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:52.889757  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:52.919631  201585 cri.go:89] found id: ""
	I1205 07:16:52.919654  201585 logs.go:282] 0 containers: []
	W1205 07:16:52.919663  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:52.919670  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:52.919729  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:52.946832  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:52.946853  201585 cri.go:89] found id: ""
	I1205 07:16:52.946860  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:52.946940  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:52.951502  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:52.951608  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:52.980612  201585 cri.go:89] found id: ""
	I1205 07:16:52.980635  201585 logs.go:282] 0 containers: []
	W1205 07:16:52.980643  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:52.980650  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:52.980705  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:53.009351  201585 cri.go:89] found id: ""
	I1205 07:16:53.009387  201585 logs.go:282] 0 containers: []
	W1205 07:16:53.009397  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:53.009417  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:53.009428  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:53.024006  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:53.024034  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:53.088248  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:53.088269  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:53.088282  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:53.135977  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:53.136005  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:53.171765  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:53.171796  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:53.201366  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:53.201396  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:53.229354  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:53.229380  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:53.289166  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:53.289200  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:53.320650  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:53.320679  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
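[Editor's note: across every pass, crictl only ever finds kube-apiserver, etcd, kube-scheduler, and kube-controller-manager; the coredns, kube-proxy, kindnet, and storage-provisioner lookups all come back empty. That is consistent with a control plane that never became healthy: those workloads are created through the API server, so their containers cannot exist while :8443 refuses connections. A one-line check (sketch):

    # Empty output means the container was never created:
    sudo crictl ps -a --quiet --name=kube-proxy
]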
	I1205 07:16:55.855988  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:55.866778  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:55.866848  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:55.892929  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:55.892953  201585 cri.go:89] found id: ""
	I1205 07:16:55.892960  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:55.893015  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:55.897545  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:55.897618  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:55.922831  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:55.922850  201585 cri.go:89] found id: ""
	I1205 07:16:55.922858  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:55.922912  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:55.927238  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:55.927341  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:55.952627  201585 cri.go:89] found id: ""
	I1205 07:16:55.952648  201585 logs.go:282] 0 containers: []
	W1205 07:16:55.952656  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:55.952663  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:55.952721  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:55.981329  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:55.981348  201585 cri.go:89] found id: ""
	I1205 07:16:55.981356  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:55.981415  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:55.985843  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:55.985910  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:56.022239  201585 cri.go:89] found id: ""
	I1205 07:16:56.022264  201585 logs.go:282] 0 containers: []
	W1205 07:16:56.022273  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:56.022279  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:56.022341  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:56.049452  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:56.049474  201585 cri.go:89] found id: ""
	I1205 07:16:56.049482  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:56.049540  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:56.054422  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:56.054496  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:56.081639  201585 cri.go:89] found id: ""
	I1205 07:16:56.081664  201585 logs.go:282] 0 containers: []
	W1205 07:16:56.081675  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:56.081682  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:56.081744  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:56.108297  201585 cri.go:89] found id: ""
	I1205 07:16:56.108319  201585 logs.go:282] 0 containers: []
	W1205 07:16:56.108327  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:56.108341  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:56.108356  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:56.145023  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:56.145053  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:56.178731  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:56.178764  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:16:56.208447  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:56.208473  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:56.267321  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:56.267354  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:56.306169  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:56.306197  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:56.346070  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:56.346148  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:56.360583  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:56.360607  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:56.435162  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:56.435181  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:56.435196  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:58.966957  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:16:58.977827  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:16:58.977947  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:16:59.012390  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:59.012451  201585 cri.go:89] found id: ""
	I1205 07:16:59.012472  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:16:59.012554  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:59.017635  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:16:59.017700  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:16:59.047058  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:59.047077  201585 cri.go:89] found id: ""
	I1205 07:16:59.047085  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:16:59.047140  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:59.051849  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:16:59.051911  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:16:59.084073  201585 cri.go:89] found id: ""
	I1205 07:16:59.084095  201585 logs.go:282] 0 containers: []
	W1205 07:16:59.084103  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:16:59.084109  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:16:59.084164  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:16:59.110441  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:59.110502  201585 cri.go:89] found id: ""
	I1205 07:16:59.110523  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:16:59.110609  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:59.115242  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:16:59.115345  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:16:59.145712  201585 cri.go:89] found id: ""
	I1205 07:16:59.145776  201585 logs.go:282] 0 containers: []
	W1205 07:16:59.145800  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:16:59.145817  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:16:59.145903  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:16:59.174287  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:59.174356  201585 cri.go:89] found id: ""
	I1205 07:16:59.174376  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:16:59.174463  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:16:59.179294  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:16:59.179392  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:16:59.215588  201585 cri.go:89] found id: ""
	I1205 07:16:59.215661  201585 logs.go:282] 0 containers: []
	W1205 07:16:59.215684  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:16:59.215702  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:16:59.215782  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:16:59.243923  201585 cri.go:89] found id: ""
	I1205 07:16:59.243987  201585 logs.go:282] 0 containers: []
	W1205 07:16:59.244012  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:16:59.244036  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:16:59.244072  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:16:59.323313  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:16:59.323380  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:16:59.323406  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:16:59.377110  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:16:59.378918  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:16:59.442757  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:16:59.442828  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:16:59.488875  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:16:59.488908  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:16:59.522269  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:16:59.522302  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:16:59.582120  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:16:59.582153  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:16:59.595985  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:16:59.596013  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:16:59.627453  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:16:59.627481  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:02.159059  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:02.172046  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:02.172169  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:02.201771  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:02.201839  201585 cri.go:89] found id: ""
	I1205 07:17:02.201862  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:02.201958  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:02.207242  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:02.207335  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:02.241542  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:02.241569  201585 cri.go:89] found id: ""
	I1205 07:17:02.241577  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:02.241638  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:02.246897  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:02.247030  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:02.277379  201585 cri.go:89] found id: ""
	I1205 07:17:02.277407  201585 logs.go:282] 0 containers: []
	W1205 07:17:02.277417  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:02.277424  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:02.277536  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:02.308872  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:02.308894  201585 cri.go:89] found id: ""
	I1205 07:17:02.308903  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:02.308967  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:02.314144  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:02.314221  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:02.345733  201585 cri.go:89] found id: ""
	I1205 07:17:02.345779  201585 logs.go:282] 0 containers: []
	W1205 07:17:02.345790  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:02.345799  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:02.345893  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:02.381772  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:02.381805  201585 cri.go:89] found id: ""
	I1205 07:17:02.381816  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:02.381885  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:02.388076  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:02.388178  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:02.419278  201585 cri.go:89] found id: ""
	I1205 07:17:02.419305  201585 logs.go:282] 0 containers: []
	W1205 07:17:02.419325  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:02.419334  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:02.419399  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:02.450394  201585 cri.go:89] found id: ""
	I1205 07:17:02.450420  201585 logs.go:282] 0 containers: []
	W1205 07:17:02.450429  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:02.450444  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:02.450457  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:02.492194  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:02.492278  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:02.530365  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:02.530401  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:02.568509  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:02.568542  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:02.633570  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:02.633656  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:02.719729  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:02.719753  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:02.719767  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:02.760446  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:02.760489  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:02.799268  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:02.799308  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:02.846273  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:02.846309  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:05.362417  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:05.375654  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:05.375735  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:05.408249  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:05.408276  201585 cri.go:89] found id: ""
	I1205 07:17:05.408284  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:05.408346  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:05.417002  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:05.417077  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:05.448082  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:05.448114  201585 cri.go:89] found id: ""
	I1205 07:17:05.448124  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:05.448190  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:05.453488  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:05.453577  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:05.482192  201585 cri.go:89] found id: ""
	I1205 07:17:05.482217  201585 logs.go:282] 0 containers: []
	W1205 07:17:05.482228  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:05.482236  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:05.482301  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:05.516961  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:05.516987  201585 cri.go:89] found id: ""
	I1205 07:17:05.517014  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:05.517091  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:05.522410  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:05.522497  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:05.560200  201585 cri.go:89] found id: ""
	I1205 07:17:05.560229  201585 logs.go:282] 0 containers: []
	W1205 07:17:05.560238  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:05.560246  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:05.560312  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:05.590269  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:05.590295  201585 cri.go:89] found id: ""
	I1205 07:17:05.590303  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:05.590375  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:05.597504  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:05.597591  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:05.635981  201585 cri.go:89] found id: ""
	I1205 07:17:05.636023  201585 logs.go:282] 0 containers: []
	W1205 07:17:05.636033  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:05.636039  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:05.636109  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:05.673550  201585 cri.go:89] found id: ""
	I1205 07:17:05.673575  201585 logs.go:282] 0 containers: []
	W1205 07:17:05.673583  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:05.673598  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:05.673611  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:05.717689  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:05.717720  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:05.763854  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:05.763896  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:05.799230  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:05.799262  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:05.835193  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:05.835226  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:05.875199  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:05.875235  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:05.938485  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:05.938523  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:05.953574  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:05.953605  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:06.089295  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:06.089318  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:06.089338  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:08.633284  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:08.646102  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:08.646176  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:08.675890  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:08.675914  201585 cri.go:89] found id: ""
	I1205 07:17:08.675922  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:08.675981  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:08.680460  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:08.680535  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:08.712529  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:08.712554  201585 cri.go:89] found id: ""
	I1205 07:17:08.712562  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:08.712614  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:08.716864  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:08.716959  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:08.743343  201585 cri.go:89] found id: ""
	I1205 07:17:08.743411  201585 logs.go:282] 0 containers: []
	W1205 07:17:08.743434  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:08.743453  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:08.743532  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:08.769686  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:08.769711  201585 cri.go:89] found id: ""
	I1205 07:17:08.769721  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:08.769776  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:08.774309  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:08.774397  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:08.800109  201585 cri.go:89] found id: ""
	I1205 07:17:08.800130  201585 logs.go:282] 0 containers: []
	W1205 07:17:08.800139  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:08.800145  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:08.800202  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:08.831166  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:08.831227  201585 cri.go:89] found id: ""
	I1205 07:17:08.831257  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:08.831344  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:08.835731  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:08.835800  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:08.863756  201585 cri.go:89] found id: ""
	I1205 07:17:08.863783  201585 logs.go:282] 0 containers: []
	W1205 07:17:08.863798  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:08.863804  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:08.863868  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:08.888864  201585 cri.go:89] found id: ""
	I1205 07:17:08.888887  201585 logs.go:282] 0 containers: []
	W1205 07:17:08.888895  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:08.888911  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:08.888923  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:08.902621  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:08.902648  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:08.945093  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:08.945123  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:08.977489  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:08.977522  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:09.020993  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:09.021022  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:09.081930  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:09.081964  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:09.147309  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:09.147374  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:09.147399  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:09.185980  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:09.186011  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:09.221964  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:09.221991  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
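[Editor's note: rather than reading these interleaved passes, the same material can be captured in a single file from the host with minikube's log export; a sketch (<profile> stands for whatever -p value the failing test used):

    minikube logs -p <profile> --file=./minikube-logs.txt
]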
	I1205 07:17:11.755644  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:11.766435  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:11.766500  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:11.792697  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:11.792718  201585 cri.go:89] found id: ""
	I1205 07:17:11.792726  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:11.792786  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:11.797003  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:11.797069  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:11.826285  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:11.826308  201585 cri.go:89] found id: ""
	I1205 07:17:11.826316  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:11.826371  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:11.830823  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:11.830898  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:11.857527  201585 cri.go:89] found id: ""
	I1205 07:17:11.857561  201585 logs.go:282] 0 containers: []
	W1205 07:17:11.857570  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:11.857586  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:11.857659  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:11.887013  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:11.887034  201585 cri.go:89] found id: ""
	I1205 07:17:11.887042  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:11.887105  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:11.892027  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:11.892094  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:11.920040  201585 cri.go:89] found id: ""
	I1205 07:17:11.920067  201585 logs.go:282] 0 containers: []
	W1205 07:17:11.920076  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:11.920082  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:11.920138  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:11.947862  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:11.947884  201585 cri.go:89] found id: ""
	I1205 07:17:11.947892  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:11.947959  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:11.952549  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:11.952620  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:11.981437  201585 cri.go:89] found id: ""
	I1205 07:17:11.981462  201585 logs.go:282] 0 containers: []
	W1205 07:17:11.981470  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:11.981477  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:11.981532  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:12.015702  201585 cri.go:89] found id: ""
	I1205 07:17:12.015726  201585 logs.go:282] 0 containers: []
	W1205 07:17:12.015735  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:12.015749  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:12.015761  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:12.087527  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:12.087566  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:12.101364  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:12.101395  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:12.169231  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:12.169260  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:12.169274  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:12.208724  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:12.208754  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:12.242862  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:12.242893  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:12.272697  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:12.272731  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:12.310130  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:12.310161  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:12.340985  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:12.341013  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:14.878665  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:14.889477  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:14.889546  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:14.915614  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:14.915637  201585 cri.go:89] found id: ""
	I1205 07:17:14.915645  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:14.915705  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:14.920012  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:14.920093  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:14.945721  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:14.945742  201585 cri.go:89] found id: ""
	I1205 07:17:14.945750  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:14.945806  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:14.950129  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:14.950197  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:14.974295  201585 cri.go:89] found id: ""
	I1205 07:17:14.974322  201585 logs.go:282] 0 containers: []
	W1205 07:17:14.974331  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:14.974337  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:14.974393  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:15.000044  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:15.000070  201585 cri.go:89] found id: ""
	I1205 07:17:15.000078  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:15.000144  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:15.005815  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:15.005905  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:15.052998  201585 cri.go:89] found id: ""
	I1205 07:17:15.053027  201585 logs.go:282] 0 containers: []
	W1205 07:17:15.053036  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:15.053042  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:15.053125  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:15.084905  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:15.084925  201585 cri.go:89] found id: ""
	I1205 07:17:15.084934  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:15.085018  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:15.089598  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:15.089718  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:15.116660  201585 cri.go:89] found id: ""
	I1205 07:17:15.116684  201585 logs.go:282] 0 containers: []
	W1205 07:17:15.116693  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:15.116699  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:15.116783  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:15.144450  201585 cri.go:89] found id: ""
	I1205 07:17:15.144525  201585 logs.go:282] 0 containers: []
	W1205 07:17:15.144550  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:15.144586  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:15.144622  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:15.157892  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:15.157919  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:15.223807  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:15.223824  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:15.223836  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:15.259084  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:15.259112  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:15.295658  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:15.295685  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:15.324598  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:15.324627  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:15.392824  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:15.392860  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:15.434251  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:15.434284  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:15.468003  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:15.468037  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:18.010635  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:18.023768  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:18.023846  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:18.055679  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:18.055702  201585 cri.go:89] found id: ""
	I1205 07:17:18.055710  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:18.055771  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:18.060480  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:18.060551  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:18.087540  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:18.087561  201585 cri.go:89] found id: ""
	I1205 07:17:18.087569  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:18.087623  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:18.092173  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:18.092245  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:18.119057  201585 cri.go:89] found id: ""
	I1205 07:17:18.119081  201585 logs.go:282] 0 containers: []
	W1205 07:17:18.119090  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:18.119123  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:18.119200  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:18.146554  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:18.146579  201585 cri.go:89] found id: ""
	I1205 07:17:18.146587  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:18.146645  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:18.151341  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:18.151411  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:18.176793  201585 cri.go:89] found id: ""
	I1205 07:17:18.176816  201585 logs.go:282] 0 containers: []
	W1205 07:17:18.176825  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:18.176831  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:18.176889  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:18.202826  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:18.202847  201585 cri.go:89] found id: ""
	I1205 07:17:18.202855  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:18.202907  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:18.207355  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:18.207426  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:18.232923  201585 cri.go:89] found id: ""
	I1205 07:17:18.232947  201585 logs.go:282] 0 containers: []
	W1205 07:17:18.232955  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:18.232962  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:18.233026  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:18.258540  201585 cri.go:89] found id: ""
	I1205 07:17:18.258563  201585 logs.go:282] 0 containers: []
	W1205 07:17:18.258571  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:18.258605  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:18.258624  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:18.323715  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:18.323740  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:18.323753  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:18.368002  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:18.368031  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:18.412199  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:18.412229  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:18.450695  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:18.450723  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:18.510165  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:18.510200  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:18.525089  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:18.525117  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:18.579274  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:18.579305  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:18.615363  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:18.615396  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:21.144242  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:21.155107  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:21.155177  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:21.180309  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:21.180331  201585 cri.go:89] found id: ""
	I1205 07:17:21.180339  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:21.180397  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:21.184807  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:21.184876  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:21.218643  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:21.218669  201585 cri.go:89] found id: ""
	I1205 07:17:21.218679  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:21.218744  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:21.224097  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:21.224169  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:21.249356  201585 cri.go:89] found id: ""
	I1205 07:17:21.249384  201585 logs.go:282] 0 containers: []
	W1205 07:17:21.249393  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:21.249399  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:21.249458  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:21.276330  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:21.276357  201585 cri.go:89] found id: ""
	I1205 07:17:21.276365  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:21.276421  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:21.280603  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:21.280671  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:21.305844  201585 cri.go:89] found id: ""
	I1205 07:17:21.305869  201585 logs.go:282] 0 containers: []
	W1205 07:17:21.305877  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:21.305883  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:21.305941  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:21.331431  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:21.331454  201585 cri.go:89] found id: ""
	I1205 07:17:21.331462  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:21.331520  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:21.336014  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:21.336089  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:21.364990  201585 cri.go:89] found id: ""
	I1205 07:17:21.365016  201585 logs.go:282] 0 containers: []
	W1205 07:17:21.365026  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:21.365032  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:21.365094  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:21.395578  201585 cri.go:89] found id: ""
	I1205 07:17:21.395601  201585 logs.go:282] 0 containers: []
	W1205 07:17:21.395610  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:21.395624  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:21.395637  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:21.411604  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:21.411632  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:21.449438  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:21.449466  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:21.488831  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:21.488861  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:21.519721  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:21.519749  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:21.579396  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:21.579427  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:21.648911  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:21.648933  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:21.648946  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:21.689474  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:21.689504  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:21.724141  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:21.724176  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:24.254313  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:24.265375  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:24.265452  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:24.291172  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:24.291192  201585 cri.go:89] found id: ""
	I1205 07:17:24.291202  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:24.291259  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:24.295675  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:24.295746  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:24.320318  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:24.320340  201585 cri.go:89] found id: ""
	I1205 07:17:24.320348  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:24.320402  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:24.324880  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:24.324952  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:24.355676  201585 cri.go:89] found id: ""
	I1205 07:17:24.355705  201585 logs.go:282] 0 containers: []
	W1205 07:17:24.355713  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:24.355720  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:24.355784  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:24.384409  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:24.384431  201585 cri.go:89] found id: ""
	I1205 07:17:24.384438  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:24.384493  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:24.389400  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:24.389473  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:24.415435  201585 cri.go:89] found id: ""
	I1205 07:17:24.415460  201585 logs.go:282] 0 containers: []
	W1205 07:17:24.415468  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:24.415476  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:24.415533  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:24.447506  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:24.447527  201585 cri.go:89] found id: ""
	I1205 07:17:24.447536  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:24.447592  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:24.451953  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:24.452060  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:24.477485  201585 cri.go:89] found id: ""
	I1205 07:17:24.477507  201585 logs.go:282] 0 containers: []
	W1205 07:17:24.477515  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:24.477521  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:24.477578  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:24.501116  201585 cri.go:89] found id: ""
	I1205 07:17:24.501140  201585 logs.go:282] 0 containers: []
	W1205 07:17:24.501149  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:24.501205  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:24.501216  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:24.560777  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:24.560807  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:24.574017  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:24.574090  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:24.646265  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:24.646285  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:24.646297  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:24.695769  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:24.695798  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:24.732419  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:24.732449  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:24.767375  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:24.767411  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:24.797890  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:24.797921  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:24.837253  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:24.837285  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:27.373268  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:27.385220  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:27.385289  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:27.419053  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:27.419075  201585 cri.go:89] found id: ""
	I1205 07:17:27.419083  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:27.419136  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:27.423509  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:27.423576  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:27.454735  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:27.454757  201585 cri.go:89] found id: ""
	I1205 07:17:27.454765  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:27.454818  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:27.459396  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:27.459463  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:27.484713  201585 cri.go:89] found id: ""
	I1205 07:17:27.484738  201585 logs.go:282] 0 containers: []
	W1205 07:17:27.484746  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:27.484752  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:27.484807  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:27.514793  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:27.514815  201585 cri.go:89] found id: ""
	I1205 07:17:27.514823  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:27.514880  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:27.519358  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:27.519431  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:27.545509  201585 cri.go:89] found id: ""
	I1205 07:17:27.545533  201585 logs.go:282] 0 containers: []
	W1205 07:17:27.545542  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:27.545548  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:27.545607  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:27.578317  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:27.578338  201585 cri.go:89] found id: ""
	I1205 07:17:27.578346  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:27.578406  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:27.582957  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:27.583029  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:27.608278  201585 cri.go:89] found id: ""
	I1205 07:17:27.608303  201585 logs.go:282] 0 containers: []
	W1205 07:17:27.608312  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:27.608321  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:27.608380  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:27.632963  201585 cri.go:89] found id: ""
	I1205 07:17:27.632986  201585 logs.go:282] 0 containers: []
	W1205 07:17:27.632994  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:27.633007  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:27.633018  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:27.691620  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:27.691656  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:27.755989  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:27.756008  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:27.756021  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:27.792134  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:27.792162  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:27.837447  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:27.837476  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:27.872388  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:27.872420  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:27.902758  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:27.902786  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:27.916381  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:27.916412  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:27.952449  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:27.952478  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:30.483966  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:30.494743  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:30.494815  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:30.524002  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:30.524022  201585 cri.go:89] found id: ""
	I1205 07:17:30.524030  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:30.524085  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:30.528584  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:30.528654  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:30.560131  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:30.560157  201585 cri.go:89] found id: ""
	I1205 07:17:30.560166  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:30.560219  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:30.564855  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:30.564923  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:30.590516  201585 cri.go:89] found id: ""
	I1205 07:17:30.590539  201585 logs.go:282] 0 containers: []
	W1205 07:17:30.590547  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:30.590553  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:30.590610  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:30.616349  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:30.616367  201585 cri.go:89] found id: ""
	I1205 07:17:30.616375  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:30.616428  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:30.620696  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:30.620807  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:30.649437  201585 cri.go:89] found id: ""
	I1205 07:17:30.649499  201585 logs.go:282] 0 containers: []
	W1205 07:17:30.649524  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:30.649545  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:30.649633  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:30.674638  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:30.674662  201585 cri.go:89] found id: ""
	I1205 07:17:30.674670  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:30.674740  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:30.679224  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:30.679290  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:30.705949  201585 cri.go:89] found id: ""
	I1205 07:17:30.705973  201585 logs.go:282] 0 containers: []
	W1205 07:17:30.705981  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:30.705987  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:30.706046  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:30.731851  201585 cri.go:89] found id: ""
	I1205 07:17:30.731873  201585 logs.go:282] 0 containers: []
	W1205 07:17:30.731881  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:30.731897  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:30.731908  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:30.790919  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:30.790949  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:30.860194  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:30.860217  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:30.860230  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:30.892412  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:30.892441  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:30.929425  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:30.929456  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:30.959340  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:30.959368  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:30.999107  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:30.999142  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:31.015687  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:31.015719  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:31.051360  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:31.051392  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:33.588374  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:33.600461  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:33.600529  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:33.637110  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:33.637132  201585 cri.go:89] found id: ""
	I1205 07:17:33.637140  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:33.637228  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:33.641494  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:33.641568  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:33.667501  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:33.667522  201585 cri.go:89] found id: ""
	I1205 07:17:33.667530  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:33.667593  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:33.672150  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:33.672221  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:33.697494  201585 cri.go:89] found id: ""
	I1205 07:17:33.697515  201585 logs.go:282] 0 containers: []
	W1205 07:17:33.697523  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:33.697530  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:33.697589  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:33.724440  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:33.724462  201585 cri.go:89] found id: ""
	I1205 07:17:33.724470  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:33.724528  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:33.728989  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:33.729060  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:33.754276  201585 cri.go:89] found id: ""
	I1205 07:17:33.754300  201585 logs.go:282] 0 containers: []
	W1205 07:17:33.754309  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:33.754315  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:33.754371  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:33.784263  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:33.784287  201585 cri.go:89] found id: ""
	I1205 07:17:33.784295  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:33.784351  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:33.788938  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:33.789008  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:33.817940  201585 cri.go:89] found id: ""
	I1205 07:17:33.817969  201585 logs.go:282] 0 containers: []
	W1205 07:17:33.817978  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:33.817984  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:33.818042  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:33.843585  201585 cri.go:89] found id: ""
	I1205 07:17:33.843608  201585 logs.go:282] 0 containers: []
	W1205 07:17:33.843618  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:33.843631  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:33.843644  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:33.903962  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:33.903998  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:33.917891  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:33.917918  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:33.989102  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:33.989124  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:33.989136  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:34.033787  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:34.033825  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:34.065289  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:34.065322  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:34.099552  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:34.099585  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:34.136101  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:34.136136  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:34.189979  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:34.190012  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
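	[editor note] Each cycle above follows the same shape: minikube probes for a running apiserver process ("sudo pgrep -xnf kube-apiserver.*minikube.*"), enumerates CRI containers per component with "crictl ps -a --quiet --name=<component>" (the cri.go:54/cri.go:89 lines), then gathers logs for whatever it found (the logs.go:123 lines). Below is a minimal Go sketch of just the container lookup, assuming crictl is on PATH and sudo works non-interactively on the node; the listContainers helper is illustrative, not minikube's actual cri.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>"
	// calls in the log: it returns the IDs of all CRI containers (any state)
	// whose name matches the component. Hypothetical helper, for illustration.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		// The same components the loop above queries; empty results correspond
		// to the `No container was found matching "..."` warnings in the log.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: error: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
		}
	}

	In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever return an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty throughout, which is consistent with an apiserver that never becomes reachable.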
	I1205 07:17:36.730320  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:36.742799  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:36.742872  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:36.769933  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:36.769952  201585 cri.go:89] found id: ""
	I1205 07:17:36.769960  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:36.770016  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:36.774533  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:36.774604  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:36.803053  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:36.803076  201585 cri.go:89] found id: ""
	I1205 07:17:36.803084  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:36.803142  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:36.807696  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:36.807773  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:36.835227  201585 cri.go:89] found id: ""
	I1205 07:17:36.835249  201585 logs.go:282] 0 containers: []
	W1205 07:17:36.835258  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:36.835264  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:36.835349  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:36.865550  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:36.865572  201585 cri.go:89] found id: ""
	I1205 07:17:36.865580  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:36.865661  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:36.870082  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:36.870182  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:36.894712  201585 cri.go:89] found id: ""
	I1205 07:17:36.894736  201585 logs.go:282] 0 containers: []
	W1205 07:17:36.894744  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:36.894751  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:36.894839  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:36.920048  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:36.920070  201585 cri.go:89] found id: ""
	I1205 07:17:36.920078  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:36.920161  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:36.924575  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:36.924664  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:36.951839  201585 cri.go:89] found id: ""
	I1205 07:17:36.951864  201585 logs.go:282] 0 containers: []
	W1205 07:17:36.951873  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:36.951913  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:36.951995  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:36.979400  201585 cri.go:89] found id: ""
	I1205 07:17:36.979462  201585 logs.go:282] 0 containers: []
	W1205 07:17:36.979483  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:36.979500  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:36.979512  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:37.038218  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:37.038252  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:37.121115  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:37.121136  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:37.121149  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:37.162199  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:37.162239  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:37.199542  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:37.199571  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:37.233451  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:37.233479  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:37.266055  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:37.266086  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:37.301427  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:37.301454  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:37.314530  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:37.314556  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:39.859274  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:39.870177  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:39.870247  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:39.897201  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:39.897226  201585 cri.go:89] found id: ""
	I1205 07:17:39.897234  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:39.897289  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:39.901771  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:39.901844  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:39.927218  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:39.927240  201585 cri.go:89] found id: ""
	I1205 07:17:39.927248  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:39.927301  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:39.931851  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:39.931921  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:39.967646  201585 cri.go:89] found id: ""
	I1205 07:17:39.967669  201585 logs.go:282] 0 containers: []
	W1205 07:17:39.967677  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:39.967683  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:39.967743  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:39.998129  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:39.998151  201585 cri.go:89] found id: ""
	I1205 07:17:39.998160  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:39.998232  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:40.006016  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:40.006129  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:40.039996  201585 cri.go:89] found id: ""
	I1205 07:17:40.040023  201585 logs.go:282] 0 containers: []
	W1205 07:17:40.040033  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:40.040040  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:40.040103  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:40.067786  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:40.067809  201585 cri.go:89] found id: ""
	I1205 07:17:40.067817  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:40.067882  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:40.072579  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:40.072676  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:40.102026  201585 cri.go:89] found id: ""
	I1205 07:17:40.102050  201585 logs.go:282] 0 containers: []
	W1205 07:17:40.102059  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:40.102065  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:40.102136  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:40.138490  201585 cri.go:89] found id: ""
	I1205 07:17:40.138527  201585 logs.go:282] 0 containers: []
	W1205 07:17:40.138536  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:40.138550  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:40.138561  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:40.152851  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:40.152885  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:40.191122  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:40.191196  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:40.228781  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:40.228809  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:40.260666  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:40.260694  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:40.294528  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:40.294563  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:40.356112  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:40.356149  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:40.419595  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:40.419616  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:40.419630  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:40.463932  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:40.463964  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:43.007823  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:43.019827  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:43.019902  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:43.045796  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:43.045818  201585 cri.go:89] found id: ""
	I1205 07:17:43.045825  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:43.045880  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:43.050490  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:43.050562  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:43.078079  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:43.078107  201585 cri.go:89] found id: ""
	I1205 07:17:43.078115  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:43.078170  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:43.083029  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:43.083162  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:43.135546  201585 cri.go:89] found id: ""
	I1205 07:17:43.135619  201585 logs.go:282] 0 containers: []
	W1205 07:17:43.135641  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:43.135662  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:43.135750  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:43.179024  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:43.179046  201585 cri.go:89] found id: ""
	I1205 07:17:43.179055  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:43.179113  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:43.191879  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:43.192003  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:43.226945  201585 cri.go:89] found id: ""
	I1205 07:17:43.227019  201585 logs.go:282] 0 containers: []
	W1205 07:17:43.227042  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:43.227060  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:43.227145  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:43.260197  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:43.260257  201585 cri.go:89] found id: ""
	I1205 07:17:43.260290  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:43.260374  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:43.264863  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:43.264934  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:43.291243  201585 cri.go:89] found id: ""
	I1205 07:17:43.291269  201585 logs.go:282] 0 containers: []
	W1205 07:17:43.291278  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:43.291284  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:43.291341  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:43.323061  201585 cri.go:89] found id: ""
	I1205 07:17:43.323086  201585 logs.go:282] 0 containers: []
	W1205 07:17:43.323095  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:43.323109  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:43.323121  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:43.337409  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:43.337438  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:43.415834  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:43.415856  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:43.415870  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:43.450413  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:43.450459  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:43.489596  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:43.489625  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:43.523474  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:43.523511  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:43.554074  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:43.554109  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:43.622137  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:43.622172  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:43.658929  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:43.658959  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:46.190805  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:46.204591  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:46.204671  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:46.238748  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:46.238776  201585 cri.go:89] found id: ""
	I1205 07:17:46.238785  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:46.238841  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:46.243354  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:46.243422  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:46.269810  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:46.269830  201585 cri.go:89] found id: ""
	I1205 07:17:46.269838  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:46.269898  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:46.274542  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:46.274691  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:46.303253  201585 cri.go:89] found id: ""
	I1205 07:17:46.303279  201585 logs.go:282] 0 containers: []
	W1205 07:17:46.303287  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:46.303293  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:46.303352  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:46.329286  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:46.329310  201585 cri.go:89] found id: ""
	I1205 07:17:46.329318  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:46.329374  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:46.334054  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:46.334135  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:46.364407  201585 cri.go:89] found id: ""
	I1205 07:17:46.364432  201585 logs.go:282] 0 containers: []
	W1205 07:17:46.364441  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:46.364447  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:46.364509  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:46.390735  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:46.390759  201585 cri.go:89] found id: ""
	I1205 07:17:46.390767  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:46.390823  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:46.396365  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:46.396434  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:46.426506  201585 cri.go:89] found id: ""
	I1205 07:17:46.426580  201585 logs.go:282] 0 containers: []
	W1205 07:17:46.426604  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:46.426632  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:46.426738  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:46.453599  201585 cri.go:89] found id: ""
	I1205 07:17:46.453625  201585 logs.go:282] 0 containers: []
	W1205 07:17:46.453635  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:46.453650  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:46.453663  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:46.492046  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:46.492079  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:46.524557  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:46.524586  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:46.555775  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:46.555806  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:46.589870  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:46.589904  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:46.627643  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:46.627679  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:46.657276  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:46.657305  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:46.718331  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:46.718366  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:46.732741  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:46.732769  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:46.800013  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
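	[editor note] Every "describe nodes" attempt in these cycles fails identically: the bundled kubectl at /var/lib/minikube/binaries/v1.35.0-beta.0 dials the apiserver at localhost:8443 via /var/lib/minikube/kubeconfig and the connection is refused, so only kubelet, containerd, dmesg, and per-container logs are recoverable. A tiny probe that reproduces just that symptom, assuming it runs on the node itself; illustrative only, not part of the test harness.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint the failing kubectl calls target. A refused dial here
		// matches "The connection to the server localhost:8443 was refused".
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}

	Note the apiserver container ID (907ab764...) is found in every cycle, so the container exists but its process is not serving on 8443; the crictl log gathering for that ID is where the actual failure cause would surface.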
	I1205 07:17:49.300309  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:49.311435  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:49.311503  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:49.336981  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:49.337000  201585 cri.go:89] found id: ""
	I1205 07:17:49.337008  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:49.337064  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:49.341764  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:49.341854  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:49.370946  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:49.370968  201585 cri.go:89] found id: ""
	I1205 07:17:49.370976  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:49.371051  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:49.375525  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:49.375598  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:49.402258  201585 cri.go:89] found id: ""
	I1205 07:17:49.402283  201585 logs.go:282] 0 containers: []
	W1205 07:17:49.402292  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:49.402298  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:49.402357  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:49.428254  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:49.428277  201585 cri.go:89] found id: ""
	I1205 07:17:49.428285  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:49.428344  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:49.433000  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:49.433072  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:49.459690  201585 cri.go:89] found id: ""
	I1205 07:17:49.459714  201585 logs.go:282] 0 containers: []
	W1205 07:17:49.459723  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:49.459733  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:49.459789  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:49.485077  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:49.485101  201585 cri.go:89] found id: ""
	I1205 07:17:49.485110  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:49.485191  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:49.489535  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:49.489606  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:49.525078  201585 cri.go:89] found id: ""
	I1205 07:17:49.525103  201585 logs.go:282] 0 containers: []
	W1205 07:17:49.525112  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:49.525118  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:49.525204  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:49.551833  201585 cri.go:89] found id: ""
	I1205 07:17:49.551859  201585 logs.go:282] 0 containers: []
	W1205 07:17:49.551868  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:49.551886  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:49.551903  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:49.624259  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:49.624278  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:49.624291  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:49.663835  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:49.663867  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:49.700296  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:49.700329  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:49.747730  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:49.747756  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:49.785270  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:49.785303  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:49.824643  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:49.824671  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:49.917792  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:49.917829  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:49.936445  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:49.936473  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:52.474622  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:52.486184  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:52.486255  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:52.514346  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:52.514370  201585 cri.go:89] found id: ""
	I1205 07:17:52.514379  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:52.514437  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:52.519176  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:52.519260  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:52.547035  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:52.547058  201585 cri.go:89] found id: ""
	I1205 07:17:52.547066  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:52.547122  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:52.551599  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:52.551680  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:52.578680  201585 cri.go:89] found id: ""
	I1205 07:17:52.578704  201585 logs.go:282] 0 containers: []
	W1205 07:17:52.578712  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:52.578719  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:52.578781  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:52.605367  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:52.605390  201585 cri.go:89] found id: ""
	I1205 07:17:52.605398  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:52.605454  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:52.610047  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:52.610126  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:52.636789  201585 cri.go:89] found id: ""
	I1205 07:17:52.636810  201585 logs.go:282] 0 containers: []
	W1205 07:17:52.636818  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:52.636824  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:52.636882  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:52.666973  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:52.666994  201585 cri.go:89] found id: ""
	I1205 07:17:52.667002  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:52.667060  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:52.671499  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:52.671601  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:52.696895  201585 cri.go:89] found id: ""
	I1205 07:17:52.696966  201585 logs.go:282] 0 containers: []
	W1205 07:17:52.696989  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:52.697007  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:52.697090  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:52.723091  201585 cri.go:89] found id: ""
	I1205 07:17:52.723156  201585 logs.go:282] 0 containers: []
	W1205 07:17:52.723174  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:52.723188  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:52.723200  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:52.782513  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:52.782549  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:52.825580  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:52.825611  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:52.863006  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:52.863043  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:52.910842  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:52.910871  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:52.925397  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:52.925425  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:52.992466  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:52.992487  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:52.992500  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:53.028768  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:53.028800  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:53.061302  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:53.061330  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:55.592299  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:55.603400  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:55.603464  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:55.632982  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:55.633004  201585 cri.go:89] found id: ""
	I1205 07:17:55.633012  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:55.633072  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:55.637871  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:55.637943  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:55.663303  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:55.663325  201585 cri.go:89] found id: ""
	I1205 07:17:55.663334  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:55.663389  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:55.667985  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:55.668059  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:55.693991  201585 cri.go:89] found id: ""
	I1205 07:17:55.694014  201585 logs.go:282] 0 containers: []
	W1205 07:17:55.694023  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:55.694036  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:55.694098  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:55.719975  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:55.720000  201585 cri.go:89] found id: ""
	I1205 07:17:55.720009  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:55.720104  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:55.724694  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:55.724771  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:55.750451  201585 cri.go:89] found id: ""
	I1205 07:17:55.750516  201585 logs.go:282] 0 containers: []
	W1205 07:17:55.750531  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:55.750538  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:55.750608  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:55.784655  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:55.784679  201585 cri.go:89] found id: ""
	I1205 07:17:55.784687  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:55.784740  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:55.789294  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:55.789415  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:55.815628  201585 cri.go:89] found id: ""
	I1205 07:17:55.815656  201585 logs.go:282] 0 containers: []
	W1205 07:17:55.815665  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:55.815673  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:55.815732  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:55.841639  201585 cri.go:89] found id: ""
	I1205 07:17:55.841661  201585 logs.go:282] 0 containers: []
	W1205 07:17:55.841670  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:17:55.841687  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:55.841700  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:55.856123  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:55.856150  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:17:55.902414  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:55.902440  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:55.967735  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:55.967771  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:56.040709  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:17:56.040772  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:56.040794  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:56.079956  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:56.079989  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:56.112772  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:56.112811  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:56.151205  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:56.151236  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:56.181324  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:56.181352  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
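	[editor note] The timestamps show the whole gather cycle repeating on a roughly three-second cadence (07:17:33, :36, :39, :43, :46, :49, :52, :55, :58): as long as the apiserver never answers, the same enumeration and log collection re-run until the test's outer timeout expires. A minimal sketch of that poll-until-ready shape, assuming the same pgrep probe seen in the log; the function names and timeout are hypothetical, not minikube's actual wait code.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
	// lines: pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer polls every three seconds until the probe succeeds or
	// the deadline passes, like the retry loop producing the cycles above.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return nil
			}
			time.Sleep(3 * time.Second)
		}
		return errors.New("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}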
	I1205 07:17:58.714284  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:17:58.725266  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:17:58.725338  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:17:58.756032  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:58.756055  201585 cri.go:89] found id: ""
	I1205 07:17:58.756063  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:17:58.756120  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:58.760595  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:17:58.760668  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:17:58.787252  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:58.787276  201585 cri.go:89] found id: ""
	I1205 07:17:58.787284  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:17:58.787366  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:58.791877  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:17:58.791946  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:17:58.817043  201585 cri.go:89] found id: ""
	I1205 07:17:58.817064  201585 logs.go:282] 0 containers: []
	W1205 07:17:58.817078  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:17:58.817085  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:17:58.817143  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:17:58.848445  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:58.848463  201585 cri.go:89] found id: ""
	I1205 07:17:58.848471  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:17:58.848526  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:58.853394  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:17:58.853475  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:17:58.887411  201585 cri.go:89] found id: ""
	I1205 07:17:58.887434  201585 logs.go:282] 0 containers: []
	W1205 07:17:58.887442  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:17:58.887447  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:17:58.887502  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:17:58.918717  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:58.918788  201585 cri.go:89] found id: ""
	I1205 07:17:58.918809  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:17:58.918886  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:17:58.923265  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:17:58.923368  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:17:58.948397  201585 cri.go:89] found id: ""
	I1205 07:17:58.948423  201585 logs.go:282] 0 containers: []
	W1205 07:17:58.948432  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:17:58.948438  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:17:58.948499  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:17:58.973036  201585 cri.go:89] found id: ""
	I1205 07:17:58.973058  201585 logs.go:282] 0 containers: []
	W1205 07:17:58.973066  201585 logs.go:284] No container was found matching "storage-provisioner"
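
Each discovery pass above follows the same shape: minikube shells out over SSH to "sudo crictl ps -a --quiet --name=<component>" and records every non-empty output line as a container ID (the cri.go:89 "found id:" lines), ending with an empty list when nothing matches. A minimal self-contained sketch of that pattern, assuming only that crictl is on the node's PATH; the containerIDs helper is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the "sudo crictl ps -a --quiet --name=X"
    // calls above: crictl prints one container ID per line, and an
    // empty output means no container matched.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps: %w", err)
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            fmt.Println(c, ids, err)
        }
    }
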
	I1205 07:17:58.973079  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:17:58.973110  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:17:59.009445  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:17:59.009478  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:17:59.050150  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:17:59.050180  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:17:59.086761  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:17:59.086793  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:17:59.119896  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:17:59.119924  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:17:59.183885  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:17:59.183922  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:17:59.197595  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:17:59.197620  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:17:59.264307  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
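
Every "describe nodes" attempt in this stretch fails identically: a kube-apiserver container (907ab764...) exists, yet nothing answers on localhost:8443, so kubectl reports "connection refused" rather than a TLS or auth error. The connectivity half of that symptom can be checked with a bare TCP dial against the port named in the stderr; this probe is a sketch for illustration, not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8443 is taken from the kubectl error text above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // A refused dial matches the log: the container is
            // present but the apiserver inside it is not serving.
            fmt.Println("apiserver not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
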
	I1205 07:17:59.264325  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:17:59.264338  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:17:59.298206  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:17:59.298241  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:01.829683  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:01.842623  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:01.842699  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:01.915758  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:01.915782  201585 cri.go:89] found id: ""
	I1205 07:18:01.915790  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:01.915844  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:01.921013  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:01.921085  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:01.972392  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:01.972419  201585 cri.go:89] found id: ""
	I1205 07:18:01.972427  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:01.972494  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:01.977902  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:01.977978  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:02.015186  201585 cri.go:89] found id: ""
	I1205 07:18:02.015217  201585 logs.go:282] 0 containers: []
	W1205 07:18:02.015225  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:02.015232  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:02.015299  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:02.079667  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:02.079692  201585 cri.go:89] found id: ""
	I1205 07:18:02.079700  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:02.079753  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:02.084419  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:02.084493  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:02.132631  201585 cri.go:89] found id: ""
	I1205 07:18:02.132677  201585 logs.go:282] 0 containers: []
	W1205 07:18:02.132687  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:02.132694  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:02.132765  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:02.163097  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:02.163119  201585 cri.go:89] found id: ""
	I1205 07:18:02.163127  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:02.163189  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:02.167950  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:02.168022  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:02.199202  201585 cri.go:89] found id: ""
	I1205 07:18:02.199224  201585 logs.go:282] 0 containers: []
	W1205 07:18:02.199233  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:02.199240  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:02.199297  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:02.227791  201585 cri.go:89] found id: ""
	I1205 07:18:02.227817  201585 logs.go:282] 0 containers: []
	W1205 07:18:02.227827  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:02.227842  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:02.227854  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:02.261688  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:02.261717  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:02.321893  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:02.321926  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:02.397830  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:02.397853  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:02.397888  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:02.433288  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:02.433319  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:02.468454  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:02.468485  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:02.482712  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:02.482742  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:02.518966  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:02.518997  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:02.556511  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:02.556545  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
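
The gathering phase fans out over three kinds of sources, all visible in the Run: lines: per-container logs via "crictl logs --tail 400 <id>", systemd unit logs via "journalctl -u <unit> -n 400" for kubelet and containerd, and kernel messages via a filtered dmesg. A compact sketch of that fan-out, reusing the exact commands from the log; the run helper is illustrative, and errors are dropped only for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command through bash, matching the
    // /bin/bash -c invocations in the Run: lines above.
    func run(cmd string) string {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput() // errors ignored for brevity
        return string(out)
    }

    func main() {
        // kube-apiserver container ID copied from the log above.
        id := "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
        fmt.Println(run("sudo /usr/local/bin/crictl logs --tail 400 " + id))
        fmt.Println(run("sudo journalctl -u kubelet -n 400"))
        fmt.Println(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
    }
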
	I1205 07:18:05.091621  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:05.104879  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:05.104985  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:05.154118  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:05.154140  201585 cri.go:89] found id: ""
	I1205 07:18:05.154148  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:05.154208  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:05.158864  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:05.158938  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:05.191659  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:05.191688  201585 cri.go:89] found id: ""
	I1205 07:18:05.191696  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:05.191751  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:05.196536  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:05.196637  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:05.252357  201585 cri.go:89] found id: ""
	I1205 07:18:05.252385  201585 logs.go:282] 0 containers: []
	W1205 07:18:05.252395  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:05.252408  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:05.252474  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:05.294081  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:05.294116  201585 cri.go:89] found id: ""
	I1205 07:18:05.294130  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:05.294187  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:05.301053  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:05.301126  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:05.336047  201585 cri.go:89] found id: ""
	I1205 07:18:05.336075  201585 logs.go:282] 0 containers: []
	W1205 07:18:05.336084  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:05.336090  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:05.336146  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:05.378243  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:05.378274  201585 cri.go:89] found id: ""
	I1205 07:18:05.378282  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:05.378341  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:05.383703  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:05.383798  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:05.416109  201585 cri.go:89] found id: ""
	I1205 07:18:05.416138  201585 logs.go:282] 0 containers: []
	W1205 07:18:05.416147  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:05.416155  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:05.416212  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:05.449247  201585 cri.go:89] found id: ""
	I1205 07:18:05.449275  201585 logs.go:282] 0 containers: []
	W1205 07:18:05.449284  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:05.449298  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:05.449309  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:05.535324  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:05.535402  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:05.663570  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:05.663592  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:05.663607  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:05.751549  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:05.751631  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:05.787222  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:05.787251  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:05.803143  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:05.803168  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:05.846035  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:05.846136  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:05.889666  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:05.889703  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:05.928100  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:05.928130  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:08.462353  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:08.475165  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:08.475231  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:08.511777  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:08.511797  201585 cri.go:89] found id: ""
	I1205 07:18:08.511806  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:08.511866  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:08.518669  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:08.518737  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:08.555411  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:08.555474  201585 cri.go:89] found id: ""
	I1205 07:18:08.555485  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:08.555568  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:08.561284  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:08.561397  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:08.609753  201585 cri.go:89] found id: ""
	I1205 07:18:08.609819  201585 logs.go:282] 0 containers: []
	W1205 07:18:08.609839  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:08.609855  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:08.609940  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:08.672221  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:08.672283  201585 cri.go:89] found id: ""
	I1205 07:18:08.672303  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:08.672386  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:08.686557  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:08.686667  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:08.726486  201585 cri.go:89] found id: ""
	I1205 07:18:08.726548  201585 logs.go:282] 0 containers: []
	W1205 07:18:08.726568  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:08.726585  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:08.726666  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:08.758160  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:08.758238  201585 cri.go:89] found id: ""
	I1205 07:18:08.758260  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:08.758352  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:08.764276  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:08.764396  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:08.794707  201585 cri.go:89] found id: ""
	I1205 07:18:08.794785  201585 logs.go:282] 0 containers: []
	W1205 07:18:08.794809  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:08.794827  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:08.794917  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:08.834014  201585 cri.go:89] found id: ""
	I1205 07:18:08.834089  201585 logs.go:282] 0 containers: []
	W1205 07:18:08.834118  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:08.834157  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:08.834184  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:08.872316  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:08.872382  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:08.920473  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:08.921400  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:08.956364  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:08.956395  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:08.992536  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:08.992560  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:09.072331  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:09.072405  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:09.120256  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:09.120452  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:09.160975  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:09.161049  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:09.182203  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:09.182284  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:09.271142  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
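
The timestamps (07:17:58, 07:18:01, 07:18:05, 07:18:08, ...) show the whole check-and-gather cycle re-running roughly every three seconds, consistent with a deadline-bounded wait loop around the "sudo pgrep -xnf kube-apiserver.*minikube.*" probe. A minimal loop with that shape, as a sketch only; the apiserverRunning helper and the two-minute budget are assumptions, not values read from the log:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the pgrep probe above: pgrep exits
    // zero when a process matching the pattern exists.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        tick := time.NewTicker(3 * time.Second)
        defer tick.Stop()
        for {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for kube-apiserver")
                return
            case <-tick.C:
            }
        }
    }
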
	I1205 07:18:11.772264  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:11.783483  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:11.783560  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:11.810673  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:11.810695  201585 cri.go:89] found id: ""
	I1205 07:18:11.810703  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:11.810762  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:11.815324  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:11.815394  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:11.843282  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:11.843313  201585 cri.go:89] found id: ""
	I1205 07:18:11.843322  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:11.843379  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:11.848128  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:11.848206  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:11.873777  201585 cri.go:89] found id: ""
	I1205 07:18:11.873799  201585 logs.go:282] 0 containers: []
	W1205 07:18:11.873808  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:11.873815  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:11.873875  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:11.899742  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:11.899764  201585 cri.go:89] found id: ""
	I1205 07:18:11.899773  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:11.899830  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:11.904462  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:11.904537  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:11.929681  201585 cri.go:89] found id: ""
	I1205 07:18:11.929702  201585 logs.go:282] 0 containers: []
	W1205 07:18:11.929711  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:11.929717  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:11.929776  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:11.959361  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:11.959383  201585 cri.go:89] found id: ""
	I1205 07:18:11.959391  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:11.959456  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:11.963857  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:11.963928  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:11.989050  201585 cri.go:89] found id: ""
	I1205 07:18:11.989075  201585 logs.go:282] 0 containers: []
	W1205 07:18:11.989084  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:11.989090  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:11.989146  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:12.018937  201585 cri.go:89] found id: ""
	I1205 07:18:12.018963  201585 logs.go:282] 0 containers: []
	W1205 07:18:12.018973  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:12.018989  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:12.019000  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:12.090488  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:12.090533  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:12.141968  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:12.142000  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:12.185410  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:12.185442  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:12.219280  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:12.219311  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:12.275925  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:12.275955  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:12.293282  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:12.293312  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:12.396471  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:12.396492  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:12.396505  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:12.455102  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:12.455134  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:14.995909  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:15.008945  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:15.009036  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:15.044252  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:15.044274  201585 cri.go:89] found id: ""
	I1205 07:18:15.044283  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:15.044350  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:15.049543  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:15.049618  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:15.076844  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:15.076867  201585 cri.go:89] found id: ""
	I1205 07:18:15.076875  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:15.076929  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:15.081842  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:15.081921  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:15.108304  201585 cri.go:89] found id: ""
	I1205 07:18:15.108329  201585 logs.go:282] 0 containers: []
	W1205 07:18:15.108337  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:15.108344  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:15.108403  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:15.140373  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:15.140393  201585 cri.go:89] found id: ""
	I1205 07:18:15.140401  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:15.140455  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:15.145037  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:15.145105  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:15.171032  201585 cri.go:89] found id: ""
	I1205 07:18:15.171057  201585 logs.go:282] 0 containers: []
	W1205 07:18:15.171065  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:15.171073  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:15.171131  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:15.197136  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:15.197168  201585 cri.go:89] found id: ""
	I1205 07:18:15.197175  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:15.197234  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:15.201584  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:15.201656  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:15.226673  201585 cri.go:89] found id: ""
	I1205 07:18:15.226696  201585 logs.go:282] 0 containers: []
	W1205 07:18:15.226705  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:15.226711  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:15.226766  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:15.251629  201585 cri.go:89] found id: ""
	I1205 07:18:15.251654  201585 logs.go:282] 0 containers: []
	W1205 07:18:15.251663  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:15.251676  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:15.251696  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:15.285685  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:15.285714  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:15.318344  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:15.318376  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:15.352168  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:15.352200  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:15.413571  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:15.413606  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:15.427065  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:15.427092  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:15.495373  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:15.495396  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:15.495409  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:15.532874  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:15.532902  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:15.568471  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:15.568503  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:18.122556  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:18.133783  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:18.133858  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:18.163643  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:18.163665  201585 cri.go:89] found id: ""
	I1205 07:18:18.163673  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:18.163732  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:18.168337  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:18.168411  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:18.194735  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:18.194762  201585 cri.go:89] found id: ""
	I1205 07:18:18.194771  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:18.194825  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:18.199605  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:18.199678  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:18.226656  201585 cri.go:89] found id: ""
	I1205 07:18:18.226678  201585 logs.go:282] 0 containers: []
	W1205 07:18:18.226687  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:18.226693  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:18.226798  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:18.253042  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:18.253066  201585 cri.go:89] found id: ""
	I1205 07:18:18.253075  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:18.253133  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:18.257942  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:18.258043  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:18.283004  201585 cri.go:89] found id: ""
	I1205 07:18:18.283027  201585 logs.go:282] 0 containers: []
	W1205 07:18:18.283036  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:18.283042  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:18.283104  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:18.310965  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:18.310987  201585 cri.go:89] found id: ""
	I1205 07:18:18.310995  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:18.311070  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:18.315629  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:18.315701  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:18.341463  201585 cri.go:89] found id: ""
	I1205 07:18:18.341496  201585 logs.go:282] 0 containers: []
	W1205 07:18:18.341506  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:18.341512  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:18.341573  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:18.367122  201585 cri.go:89] found id: ""
	I1205 07:18:18.367144  201585 logs.go:282] 0 containers: []
	W1205 07:18:18.367153  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:18.367195  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:18.367213  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:18.380479  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:18.380507  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:18.446071  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:18.446093  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:18.446114  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:18.483879  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:18.483911  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:18.514644  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:18.514671  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:18.549516  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:18.549552  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:18.578209  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:18.578234  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:18.641011  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:18.641045  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:18.692451  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:18.692483  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:21.231264  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:21.242035  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:21.242114  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:21.267974  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:21.267997  201585 cri.go:89] found id: ""
	I1205 07:18:21.268006  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:21.268063  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:21.272461  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:21.272533  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:21.301968  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:21.301991  201585 cri.go:89] found id: ""
	I1205 07:18:21.302000  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:21.302055  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:21.306489  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:21.306558  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:21.332308  201585 cri.go:89] found id: ""
	I1205 07:18:21.332332  201585 logs.go:282] 0 containers: []
	W1205 07:18:21.332341  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:21.332347  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:21.332403  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:21.357518  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:21.357540  201585 cri.go:89] found id: ""
	I1205 07:18:21.357548  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:21.357613  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:21.362067  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:21.362145  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:21.391989  201585 cri.go:89] found id: ""
	I1205 07:18:21.392023  201585 logs.go:282] 0 containers: []
	W1205 07:18:21.392032  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:21.392038  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:21.392108  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:21.417847  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:21.417868  201585 cri.go:89] found id: ""
	I1205 07:18:21.417876  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:21.417932  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:21.422581  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:21.422742  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:21.447685  201585 cri.go:89] found id: ""
	I1205 07:18:21.447711  201585 logs.go:282] 0 containers: []
	W1205 07:18:21.447719  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:21.447725  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:21.447782  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:21.473448  201585 cri.go:89] found id: ""
	I1205 07:18:21.473470  201585 logs.go:282] 0 containers: []
	W1205 07:18:21.473478  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:21.473496  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:21.473507  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:21.507798  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:21.507832  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:21.540439  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:21.540467  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:21.573711  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:21.573743  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:21.619734  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:21.619769  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:21.634834  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:21.634861  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:21.684057  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:21.684089  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:21.715385  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:21.715416  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:21.778254  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:21.778285  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:21.847013  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:24.347319  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:24.359263  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:24.359346  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:24.387936  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:24.387959  201585 cri.go:89] found id: ""
	I1205 07:18:24.387966  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:24.388019  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:24.392566  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:24.392699  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:24.418869  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:24.418890  201585 cri.go:89] found id: ""
	I1205 07:18:24.418898  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:24.418956  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:24.423528  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:24.423596  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:24.452170  201585 cri.go:89] found id: ""
	I1205 07:18:24.452195  201585 logs.go:282] 0 containers: []
	W1205 07:18:24.452205  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:24.452211  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:24.452270  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:24.478484  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:24.478504  201585 cri.go:89] found id: ""
	I1205 07:18:24.478512  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:24.478567  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:24.483123  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:24.483193  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:24.510649  201585 cri.go:89] found id: ""
	I1205 07:18:24.510716  201585 logs.go:282] 0 containers: []
	W1205 07:18:24.510739  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:24.510756  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:24.510851  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:24.539132  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:24.539155  201585 cri.go:89] found id: ""
	I1205 07:18:24.539163  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:24.539215  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:24.543637  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:24.543713  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:24.569916  201585 cri.go:89] found id: ""
	I1205 07:18:24.569942  201585 logs.go:282] 0 containers: []
	W1205 07:18:24.569951  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:24.569958  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:24.570044  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:24.609699  201585 cri.go:89] found id: ""
	I1205 07:18:24.609771  201585 logs.go:282] 0 containers: []
	W1205 07:18:24.609793  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:24.609826  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:24.609857  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:24.624368  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:24.624431  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:24.703520  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:24.703589  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:24.703606  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:24.746707  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:24.746735  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:24.786175  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:24.786205  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:24.830453  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:24.830484  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:24.866242  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:24.866276  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:24.899253  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:24.899279  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:24.962105  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:24.962140  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
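Each gathering pass above collects the same fixed set of sources: kubelet and containerd unit logs via journalctl, a filtered dmesg tail, the last 400 lines of every control-plane container crictl can find, an overall container status listing, and a `kubectl describe nodes` attempt. A hedged bash reconstruction of one pass, run inside the node (this mirrors the commands shown in the log; minikube's real implementation is Go, in logs.go):

	# One log-gathering pass, reconstructed from the commands in this log.
	sudo journalctl -u kubelet -n 400       # kubelet unit logs
	sudo journalctl -u containerd -n 400    # container runtime logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Tail each control-plane container that is present:
	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	    for id in $(sudo crictl ps -a --quiet --name "$name"); do
	        sudo crictl logs --tail 400 "$id"
	    done
	done
	sudo crictl ps -a                       # overall container status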
	I1205 07:18:27.493057  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:27.504929  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:27.505000  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:27.529968  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:27.529990  201585 cri.go:89] found id: ""
	I1205 07:18:27.529999  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:27.530053  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:27.534558  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:27.534626  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:27.561533  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:27.561558  201585 cri.go:89] found id: ""
	I1205 07:18:27.561566  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:27.561625  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:27.566306  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:27.566381  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:27.603880  201585 cri.go:89] found id: ""
	I1205 07:18:27.603905  201585 logs.go:282] 0 containers: []
	W1205 07:18:27.603914  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:27.603921  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:27.603977  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:27.633508  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:27.633529  201585 cri.go:89] found id: ""
	I1205 07:18:27.633538  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:27.633597  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:27.639532  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:27.639629  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:27.675255  201585 cri.go:89] found id: ""
	I1205 07:18:27.675276  201585 logs.go:282] 0 containers: []
	W1205 07:18:27.675285  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:27.675291  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:27.675351  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:27.706049  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:27.706070  201585 cri.go:89] found id: ""
	I1205 07:18:27.706078  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:27.706165  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:27.710574  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:27.710649  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:27.742097  201585 cri.go:89] found id: ""
	I1205 07:18:27.742126  201585 logs.go:282] 0 containers: []
	W1205 07:18:27.742135  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:27.742141  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:27.742197  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:27.767467  201585 cri.go:89] found id: ""
	I1205 07:18:27.767490  201585 logs.go:282] 0 containers: []
	W1205 07:18:27.767499  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:27.767512  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:27.767522  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:27.826973  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:27.827008  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:27.840815  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:27.840841  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:27.874274  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:27.874308  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:27.906623  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:27.906652  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:27.943540  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:27.943572  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:27.973831  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:27.973862  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:28.048476  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:28.048499  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:28.048511  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:28.082048  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:28.082083  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:30.622472  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:30.634515  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:30.634582  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:30.670481  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:30.670506  201585 cri.go:89] found id: ""
	I1205 07:18:30.670515  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:30.670570  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:30.675207  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:30.675279  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:30.702709  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:30.702731  201585 cri.go:89] found id: ""
	I1205 07:18:30.702739  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:30.702792  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:30.707225  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:30.707298  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:30.740897  201585 cri.go:89] found id: ""
	I1205 07:18:30.740918  201585 logs.go:282] 0 containers: []
	W1205 07:18:30.740926  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:30.740933  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:30.740987  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:30.766621  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:30.766643  201585 cri.go:89] found id: ""
	I1205 07:18:30.766651  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:30.766726  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:30.771183  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:30.771256  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:30.797629  201585 cri.go:89] found id: ""
	I1205 07:18:30.797653  201585 logs.go:282] 0 containers: []
	W1205 07:18:30.797661  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:30.797667  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:30.797743  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:30.823894  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:30.823914  201585 cri.go:89] found id: ""
	I1205 07:18:30.823922  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:30.823978  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:30.828413  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:30.828527  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:30.854039  201585 cri.go:89] found id: ""
	I1205 07:18:30.854121  201585 logs.go:282] 0 containers: []
	W1205 07:18:30.854146  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:30.854166  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:30.854239  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:30.879410  201585 cri.go:89] found id: ""
	I1205 07:18:30.879475  201585 logs.go:282] 0 containers: []
	W1205 07:18:30.879498  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:30.879525  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:30.879548  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:30.937934  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:30.937968  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:31.004721  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:31.004747  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:31.004768  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:31.040195  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:31.040225  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:31.072252  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:31.072280  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:31.106957  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:31.106990  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:31.136888  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:31.136916  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:31.150647  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:31.150674  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:31.192917  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:31.192952  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
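Between passes the runner waits a few seconds and re-checks for a live apiserver process with `pgrep -xnf`, which matches the full command line of the newest matching process. An equivalent shell wait loop looks like the sketch below (the attempt count and interval are illustrative assumptions; the timestamps in the log show roughly three-second spacing):

	# Poll for a running kube-apiserver process, as the pgrep probes above do.
	# 120 attempts at 3s intervals (~6 minutes) is an assumed budget.
	for attempt in $(seq 1 120); do
	    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process found"
	        break
	    fi
	    sleep 3
	done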
	I1205 07:18:33.729308  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:33.743161  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:33.743233  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:33.768992  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:33.769014  201585 cri.go:89] found id: ""
	I1205 07:18:33.769022  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:33.769082  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:33.773717  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:33.773791  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:33.809083  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:33.809104  201585 cri.go:89] found id: ""
	I1205 07:18:33.809112  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:33.809202  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:33.813904  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:33.813974  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:33.840556  201585 cri.go:89] found id: ""
	I1205 07:18:33.840581  201585 logs.go:282] 0 containers: []
	W1205 07:18:33.840590  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:33.840597  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:33.840657  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:33.869700  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:33.869724  201585 cri.go:89] found id: ""
	I1205 07:18:33.869732  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:33.869817  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:33.874669  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:33.874771  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:33.906801  201585 cri.go:89] found id: ""
	I1205 07:18:33.906826  201585 logs.go:282] 0 containers: []
	W1205 07:18:33.906835  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:33.906841  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:33.906912  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:33.935067  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:33.935089  201585 cri.go:89] found id: ""
	I1205 07:18:33.935098  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:33.935177  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:33.941209  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:33.941310  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:33.969688  201585 cri.go:89] found id: ""
	I1205 07:18:33.969711  201585 logs.go:282] 0 containers: []
	W1205 07:18:33.969721  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:33.969727  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:33.969811  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:33.997933  201585 cri.go:89] found id: ""
	I1205 07:18:33.997958  201585 logs.go:282] 0 containers: []
	W1205 07:18:33.997967  201585 logs.go:284] No container was found matching "storage-provisioner"
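The discovery step in each pass asks crictl for every expected control-plane workload by name; in this run kube-apiserver, etcd, kube-scheduler, and kube-controller-manager are found, while coredns, kube-proxy, kindnet, and storage-provisioner never appear, consistent with a control plane stuck before addon deployment. The same picture can be pulled in one shot inside the node:

	# Count containers per expected name; zero means the component never started.
	for name in coredns kube-proxy kindnet storage-provisioner; do
	    printf '%s: ' "$name"
	    sudo crictl ps -a --quiet --name "$name" | wc -l
	done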
	I1205 07:18:33.998011  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:33.998037  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:34.036024  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:34.036059  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:34.098153  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:34.098189  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:34.135619  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:34.135652  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:34.169904  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:34.169977  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:34.205009  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:34.205039  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:34.244246  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:34.244274  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:34.258148  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:34.258176  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:34.321999  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:34.322021  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:34.322034  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:36.878341  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:36.894428  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:36.894538  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:36.934334  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:36.934354  201585 cri.go:89] found id: ""
	I1205 07:18:36.934372  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:36.934445  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:36.947140  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:36.947233  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:36.978556  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:36.978580  201585 cri.go:89] found id: ""
	I1205 07:18:36.978588  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:36.978644  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:36.982846  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:36.982913  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:37.018911  201585 cri.go:89] found id: ""
	I1205 07:18:37.018938  201585 logs.go:282] 0 containers: []
	W1205 07:18:37.018948  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:37.018955  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:37.019023  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:37.049234  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:37.049255  201585 cri.go:89] found id: ""
	I1205 07:18:37.049262  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:37.049317  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:37.053849  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:37.053921  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:37.080732  201585 cri.go:89] found id: ""
	I1205 07:18:37.080759  201585 logs.go:282] 0 containers: []
	W1205 07:18:37.080768  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:37.080778  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:37.080891  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:37.111734  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:37.111761  201585 cri.go:89] found id: ""
	I1205 07:18:37.111769  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:37.111828  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:37.116496  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:37.116572  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:37.143894  201585 cri.go:89] found id: ""
	I1205 07:18:37.143918  201585 logs.go:282] 0 containers: []
	W1205 07:18:37.143926  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:37.143932  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:37.144013  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:37.170173  201585 cri.go:89] found id: ""
	I1205 07:18:37.170198  201585 logs.go:282] 0 containers: []
	W1205 07:18:37.170206  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:37.170220  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:37.170270  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:37.201275  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:37.201308  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:37.230430  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:37.230458  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:37.292181  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:37.292217  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:37.330544  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:37.330571  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:37.367754  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:37.367786  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:37.381456  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:37.381482  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:37.454195  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
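Note that the describe-nodes probe does not use a host kubectl: minikube invokes the versioned binary it installs inside the node, pointed at the in-node kubeconfig. To rerun exactly that probe by hand (binary and kubeconfig paths copied from the log; <profile> is a placeholder):

	# Re-run the exact probe minikube uses, with its in-node kubectl and kubeconfig.
	minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	    describe nodes --kubeconfig=/var/lib/minikube/kubeconfig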
	I1205 07:18:37.454216  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:37.454229  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:37.489309  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:37.489343  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:40.026260  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:40.043803  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:40.043874  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:40.089910  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:40.089931  201585 cri.go:89] found id: ""
	I1205 07:18:40.089938  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:40.089992  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:40.095392  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:40.095462  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:40.136867  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:40.136886  201585 cri.go:89] found id: ""
	I1205 07:18:40.136894  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:40.136949  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:40.143466  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:40.143536  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:40.182605  201585 cri.go:89] found id: ""
	I1205 07:18:40.182627  201585 logs.go:282] 0 containers: []
	W1205 07:18:40.182636  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:40.182642  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:40.182707  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:40.221469  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:40.221486  201585 cri.go:89] found id: ""
	I1205 07:18:40.221493  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:40.221545  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:40.226394  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:40.226460  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:40.259606  201585 cri.go:89] found id: ""
	I1205 07:18:40.259628  201585 logs.go:282] 0 containers: []
	W1205 07:18:40.259636  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:40.259642  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:40.259701  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:40.299298  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:40.299316  201585 cri.go:89] found id: ""
	I1205 07:18:40.299325  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:40.299377  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:40.304065  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:40.304128  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:40.342012  201585 cri.go:89] found id: ""
	I1205 07:18:40.342034  201585 logs.go:282] 0 containers: []
	W1205 07:18:40.342042  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:40.342048  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:40.342107  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:40.377264  201585 cri.go:89] found id: ""
	I1205 07:18:40.377285  201585 logs.go:282] 0 containers: []
	W1205 07:18:40.377296  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:40.377308  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:40.377319  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:40.451372  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:40.451402  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:40.486326  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:40.486357  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:40.500134  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:40.500160  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:40.561072  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:40.561095  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:40.561107  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:40.590366  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:40.590397  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:40.622948  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:40.622984  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:40.669297  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:40.669372  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:40.732714  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:40.732747  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:43.268482  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:43.280645  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:43.280712  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:43.314723  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:43.314745  201585 cri.go:89] found id: ""
	I1205 07:18:43.314754  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:43.314811  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:43.320107  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:43.320176  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:43.379485  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:43.379504  201585 cri.go:89] found id: ""
	I1205 07:18:43.379512  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:43.379580  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:43.385062  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:43.385128  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:43.441779  201585 cri.go:89] found id: ""
	I1205 07:18:43.441854  201585 logs.go:282] 0 containers: []
	W1205 07:18:43.441878  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:43.441896  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:43.441977  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:43.497798  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:43.497860  201585 cri.go:89] found id: ""
	I1205 07:18:43.497882  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:43.497965  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:43.504032  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:43.504152  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:43.538409  201585 cri.go:89] found id: ""
	I1205 07:18:43.538474  201585 logs.go:282] 0 containers: []
	W1205 07:18:43.538497  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:43.538515  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:43.538605  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:43.594902  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:43.594965  201585 cri.go:89] found id: ""
	I1205 07:18:43.594987  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:43.595061  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:43.602441  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:43.602556  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:43.651861  201585 cri.go:89] found id: ""
	I1205 07:18:43.651929  201585 logs.go:282] 0 containers: []
	W1205 07:18:43.651952  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:43.651970  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:43.652039  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:43.684639  201585 cri.go:89] found id: ""
	I1205 07:18:43.684701  201585 logs.go:282] 0 containers: []
	W1205 07:18:43.684723  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:43.684750  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:43.684773  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:43.763308  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:43.763345  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:43.778479  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:43.778506  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:43.911654  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:43.911679  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:43.911695  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:43.977185  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:43.977219  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:44.036321  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:44.036352  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:44.096939  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:44.096983  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:44.138595  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:44.138630  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:44.193401  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:44.193480  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:46.747005  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:46.759056  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:46.759127  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:46.795386  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:46.795411  201585 cri.go:89] found id: ""
	I1205 07:18:46.795419  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:46.795483  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:46.800017  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:46.800086  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:46.843554  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:46.843579  201585 cri.go:89] found id: ""
	I1205 07:18:46.843587  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:46.843642  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:46.848900  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:46.848974  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:46.887507  201585 cri.go:89] found id: ""
	I1205 07:18:46.887540  201585 logs.go:282] 0 containers: []
	W1205 07:18:46.887549  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:46.887555  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:46.887613  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:46.918370  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:46.918395  201585 cri.go:89] found id: ""
	I1205 07:18:46.918404  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:46.918458  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:46.923273  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:46.923347  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:46.953794  201585 cri.go:89] found id: ""
	I1205 07:18:46.953819  201585 logs.go:282] 0 containers: []
	W1205 07:18:46.953828  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:46.953835  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:46.953899  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:46.984308  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:46.984348  201585 cri.go:89] found id: ""
	I1205 07:18:46.984356  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:46.984421  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:46.989423  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:46.989508  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:47.027578  201585 cri.go:89] found id: ""
	I1205 07:18:47.027603  201585 logs.go:282] 0 containers: []
	W1205 07:18:47.027612  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:47.027619  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:47.027692  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:47.058611  201585 cri.go:89] found id: ""
	I1205 07:18:47.058652  201585 logs.go:282] 0 containers: []
	W1205 07:18:47.058663  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:47.058682  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:47.058694  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:47.125474  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:47.125510  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:47.214618  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:47.214641  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:47.214654  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:47.263516  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:47.263589  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:47.303302  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:47.303334  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:47.341090  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:47.341122  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:47.392426  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:47.392502  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:47.420468  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:47.420538  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:47.435507  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:47.435575  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:49.972299  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:49.983432  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:49.983549  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:50.022549  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:50.022584  201585 cri.go:89] found id: ""
	I1205 07:18:50.022595  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:50.022666  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:50.027742  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:50.027839  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:50.055472  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:50.055540  201585 cri.go:89] found id: ""
	I1205 07:18:50.055561  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:50.055639  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:50.060577  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:50.060664  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:50.087867  201585 cri.go:89] found id: ""
	I1205 07:18:50.087893  201585 logs.go:282] 0 containers: []
	W1205 07:18:50.087902  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:50.087908  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:50.087966  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:50.114528  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:50.114562  201585 cri.go:89] found id: ""
	I1205 07:18:50.114570  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:50.114635  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:50.119579  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:50.119678  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:50.145730  201585 cri.go:89] found id: ""
	I1205 07:18:50.145758  201585 logs.go:282] 0 containers: []
	W1205 07:18:50.145771  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:50.145777  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:50.145882  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:50.176423  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:50.176445  201585 cri.go:89] found id: ""
	I1205 07:18:50.176453  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:50.176508  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:50.181315  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:50.181387  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:50.210393  201585 cri.go:89] found id: ""
	I1205 07:18:50.210466  201585 logs.go:282] 0 containers: []
	W1205 07:18:50.210481  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:50.210488  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:50.210546  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:50.239497  201585 cri.go:89] found id: ""
	I1205 07:18:50.239523  201585 logs.go:282] 0 containers: []
	W1205 07:18:50.239532  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:50.239546  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:50.239560  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:50.253114  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:50.253145  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:50.287761  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:50.287792  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:50.321687  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:50.321717  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:50.359933  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:50.359963  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:50.399535  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:50.399569  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:50.432923  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:50.432953  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:50.491407  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:50.491443  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:50.557497  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:50.557517  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:50.557530  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:53.094351  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:53.105111  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:18:53.105208  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:18:53.130605  201585 cri.go:89] found id: "907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:53.130626  201585 cri.go:89] found id: ""
	I1205 07:18:53.130634  201585 logs.go:282] 1 containers: [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2]
	I1205 07:18:53.130691  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:53.135062  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:18:53.135129  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:18:53.161762  201585 cri.go:89] found id: "d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:53.161786  201585 cri.go:89] found id: ""
	I1205 07:18:53.161794  201585 logs.go:282] 1 containers: [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f]
	I1205 07:18:53.161848  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:53.166432  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:18:53.166505  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:18:53.192695  201585 cri.go:89] found id: ""
	I1205 07:18:53.192725  201585 logs.go:282] 0 containers: []
	W1205 07:18:53.192735  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:18:53.192741  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:18:53.192812  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:18:53.219039  201585 cri.go:89] found id: "86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:53.219061  201585 cri.go:89] found id: ""
	I1205 07:18:53.219069  201585 logs.go:282] 1 containers: [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae]
	I1205 07:18:53.219144  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:53.223616  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:18:53.223694  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:18:53.249404  201585 cri.go:89] found id: ""
	I1205 07:18:53.249429  201585 logs.go:282] 0 containers: []
	W1205 07:18:53.249438  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:18:53.249444  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:18:53.249502  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:18:53.274962  201585 cri.go:89] found id: "7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:53.274984  201585 cri.go:89] found id: ""
	I1205 07:18:53.274992  201585 logs.go:282] 1 containers: [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96]
	I1205 07:18:53.275047  201585 ssh_runner.go:195] Run: which crictl
	I1205 07:18:53.279440  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:18:53.279507  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:18:53.304756  201585 cri.go:89] found id: ""
	I1205 07:18:53.304781  201585 logs.go:282] 0 containers: []
	W1205 07:18:53.304790  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:18:53.304796  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:18:53.304852  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:18:53.330076  201585 cri.go:89] found id: ""
	I1205 07:18:53.330101  201585 logs.go:282] 0 containers: []
	W1205 07:18:53.330110  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:18:53.330129  201585 logs.go:123] Gathering logs for kube-scheduler [86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae] ...
	I1205 07:18:53.330143  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae"
	I1205 07:18:53.385800  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:18:53.385840  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:18:53.422262  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:18:53.422358  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:18:53.437792  201585 logs.go:123] Gathering logs for kube-controller-manager [7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96] ...
	I1205 07:18:53.437818  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96"
	I1205 07:18:53.469576  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:18:53.469606  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:18:53.503214  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:18:53.503242  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:18:53.561262  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:18:53.561294  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:18:53.630945  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:18:53.630964  201585 logs.go:123] Gathering logs for kube-apiserver [907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2] ...
	I1205 07:18:53.630975  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2"
	I1205 07:18:53.665277  201585 logs.go:123] Gathering logs for etcd [d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f] ...
	I1205 07:18:53.665309  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f"
	I1205 07:18:56.201369  201585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:56.211831  201585 kubeadm.go:602] duration metric: took 4m4.478142469s to restartPrimaryControlPlane
	W1205 07:18:56.211902  201585 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 07:18:56.211964  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:18:56.690313  201585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:18:56.706048  201585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:18:56.716762  201585 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:18:56.716828  201585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:18:56.727689  201585 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:18:56.727708  201585 kubeadm.go:158] found existing configuration files:
	
	I1205 07:18:56.727757  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:18:56.737108  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:18:56.737196  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:18:56.746576  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:18:56.755654  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:18:56.755728  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:18:56.763913  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:18:56.772213  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:18:56.772280  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:18:56.780929  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:18:56.789132  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:18:56.789222  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
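The eight Run lines above perform one check-and-delete pass per kubeconfig file; condensed into a single loop as a sketch (same four files and the same grep target as in the log), keeping the log's logic of deleting any file that does not contain the expected control-plane URL:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane URL
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done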
	I1205 07:18:56.797383  201585 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:18:56.843050  201585 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:18:56.843115  201585 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:18:56.920680  201585 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:18:56.920756  201585 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:18:56.920797  201585 kubeadm.go:319] OS: Linux
	I1205 07:18:56.920847  201585 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:18:56.920900  201585 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:18:56.920950  201585 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:18:56.921002  201585 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:18:56.921054  201585 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:18:56.921105  201585 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:18:56.921183  201585 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:18:56.921237  201585 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:18:56.921289  201585 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:18:56.987932  201585 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:18:56.988060  201585 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:18:56.988162  201585 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:18:56.993739  201585 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:18:57.000390  201585 out.go:252]   - Generating certificates and keys ...
	I1205 07:18:57.000483  201585 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:18:57.000578  201585 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:18:57.000673  201585 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:18:57.000736  201585 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:18:57.000806  201585 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:18:57.000859  201585 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:18:57.000923  201585 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:18:57.001279  201585 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:18:57.003075  201585 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:18:57.005489  201585 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:18:57.006413  201585 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:18:57.006503  201585 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:18:57.371949  201585 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:18:58.675290  201585 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:18:58.901841  201585 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:18:59.111386  201585 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:18:59.222869  201585 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:18:59.223698  201585 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:18:59.227353  201585 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:18:59.230848  201585 out.go:252]   - Booting up control plane ...
	I1205 07:18:59.230958  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:18:59.231046  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:18:59.231440  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:18:59.251355  201585 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:18:59.251532  201585 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:18:59.258815  201585 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:18:59.259322  201585 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:18:59.259380  201585 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:18:59.394770  201585 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:18:59.394926  201585 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:22:59.393441  201585 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00027075s
	I1205 07:22:59.393480  201585 kubeadm.go:319] 
	I1205 07:22:59.393538  201585 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:22:59.393574  201585 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:22:59.393679  201585 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:22:59.393686  201585 kubeadm.go:319] 
	I1205 07:22:59.393790  201585 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:22:59.393822  201585 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:22:59.393852  201585 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:22:59.393856  201585 kubeadm.go:319] 
	I1205 07:22:59.397860  201585 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:22:59.398300  201585 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:22:59.398412  201585 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:22:59.398650  201585 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:22:59.398659  201585 kubeadm.go:319] 
	I1205 07:22:59.398729  201585 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
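kubeadm's own hint above is the right starting point. The checks it names, plus the health endpoint it polled, combined into one sketch (every command is taken from the log text itself):

	systemctl status kubelet                   # is the unit active at all?
	journalctl -xeu kubelet | tail -n 100      # recent kubelet errors, if it exited
	curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm gave up on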
	W1205 07:22:59.398843  201585 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00027075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
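Given the cgroups v1 warning in the stderr above, one hedged remediation sketch is to set the kubelet option the warning names in the config file that the [kubelet-start] lines show kubeadm writing (field spelling failCgroupV1 per kubelet's v1beta1 KubeletConfiguration; verify against your kubelet version, and note the warning also says the validation must be skipped explicitly):

	# Append the opt-in named by the warning, then restart the kubelet
	# (config.yaml path taken from the [kubelet-start] lines above):
	grep -q '^failCgroupV1' /var/lib/kubelet/config.yaml || \
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet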
	
	I1205 07:22:59.398923  201585 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:22:59.807761  201585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:22:59.822353  201585 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:22:59.822417  201585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:22:59.831473  201585 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:22:59.831497  201585 kubeadm.go:158] found existing configuration files:
	
	I1205 07:22:59.831565  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:22:59.840735  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:22:59.840796  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:22:59.850254  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:22:59.858987  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:22:59.859057  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:22:59.867420  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:22:59.876017  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:22:59.876081  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:22:59.884240  201585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:22:59.893349  201585 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:22:59.893419  201585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:22:59.901789  201585 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:22:59.942770  201585 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:22:59.942994  201585 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:23:00.061103  201585 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:23:00.061207  201585 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:23:00.061247  201585 kubeadm.go:319] OS: Linux
	I1205 07:23:00.061301  201585 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:23:00.061357  201585 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:23:00.061409  201585 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:23:00.061461  201585 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:23:00.061513  201585 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:23:00.061571  201585 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:23:00.061620  201585 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:23:00.061673  201585 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:23:00.061724  201585 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:23:00.232982  201585 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:23:00.233096  201585 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:23:00.250686  201585 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:23:00.250910  201585 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:23:00.286617  201585 out.go:252]   - Generating certificates and keys ...
	I1205 07:23:00.286794  201585 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:23:00.286885  201585 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:23:00.287001  201585 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:23:00.287080  201585 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:23:00.287172  201585 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:23:00.287244  201585 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:23:00.287328  201585 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:23:00.287402  201585 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:23:00.287501  201585 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:23:00.287602  201585 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:23:00.287657  201585 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:23:00.287738  201585 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:23:00.382410  201585 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:23:00.677702  201585 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:23:00.940892  201585 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:23:01.076415  201585 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:23:01.255804  201585 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:23:01.256641  201585 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:23:01.260755  201585 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:23:01.264028  201585 out.go:252]   - Booting up control plane ...
	I1205 07:23:01.264149  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:23:01.264526  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:23:01.265304  201585 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:23:01.287592  201585 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:23:01.287733  201585 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:23:01.296341  201585 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:23:01.296877  201585 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:23:01.296942  201585 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:23:01.453383  201585 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:23:01.453512  201585 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:27:01.453527  201585 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000160045s
	I1205 07:27:01.453563  201585 kubeadm.go:319] 
	I1205 07:27:01.453643  201585 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:27:01.453688  201585 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:27:01.453827  201585 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:27:01.453843  201585 kubeadm.go:319] 
	I1205 07:27:01.453960  201585 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:27:01.454006  201585 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:27:01.454039  201585 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:27:01.454046  201585 kubeadm.go:319] 
	I1205 07:27:01.458588  201585 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:27:01.459097  201585 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:27:01.459219  201585 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:27:01.459499  201585 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:27:01.459508  201585 kubeadm.go:319] 
	I1205 07:27:01.459588  201585 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:27:01.459652  201585 kubeadm.go:403] duration metric: took 12m9.799121584s to StartCluster
	I1205 07:27:01.459690  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:27:01.459773  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:27:01.487360  201585 cri.go:89] found id: ""
	I1205 07:27:01.487395  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.487404  201585 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:27:01.487411  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:27:01.487473  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:27:01.513700  201585 cri.go:89] found id: ""
	I1205 07:27:01.513731  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.513739  201585 logs.go:284] No container was found matching "etcd"
	I1205 07:27:01.513746  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:27:01.513806  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:27:01.538290  201585 cri.go:89] found id: ""
	I1205 07:27:01.538316  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.538325  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:27:01.538332  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:27:01.538389  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:27:01.566506  201585 cri.go:89] found id: ""
	I1205 07:27:01.566527  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.566535  201585 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:27:01.566542  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:27:01.566599  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:27:01.592455  201585 cri.go:89] found id: ""
	I1205 07:27:01.592480  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.592489  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:27:01.592495  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:27:01.592555  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:27:01.619653  201585 cri.go:89] found id: ""
	I1205 07:27:01.619679  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.619687  201585 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:27:01.619695  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:27:01.619751  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:27:01.645975  201585 cri.go:89] found id: ""
	I1205 07:27:01.645999  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.646007  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:27:01.646015  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:27:01.646110  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:27:01.673331  201585 cri.go:89] found id: ""
	I1205 07:27:01.673357  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.673366  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:27:01.673377  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:27:01.673388  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:27:01.742506  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:27:01.742524  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:27:01.742537  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:27:01.783198  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:27:01.783231  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:27:01.820852  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:27:01.820877  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:27:01.887749  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:27:01.887790  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:27:01.903169  201585 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:27:01.903245  201585 out.go:285] * 
	W1205 07:27:01.903299  201585 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:27:01.903316  201585 out.go:285] * 
	W1205 07:27:01.905628  201585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:27:01.910909  201585 out.go:203] 
	W1205 07:27:01.914875  201585 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:27:01.914927  201585 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:27:01.914948  201585 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:27:01.918208  201585 out.go:203] 

** /stderr **
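The repeated kubeadm output above reduces to one symptom: the kubelet never answers its health endpoint on 127.0.0.1:10248, so the wait-control-plane phase times out after 4m0s. A minimal triage sketch along the lines the log itself suggests (assuming the kubernetes-upgrade-496233 profile is still up and a minikube binary on PATH; the cgroup-driver flag is the workaround named in the Suggestion line):

	# Inspect kubelet state inside the minikube node, per the kubeadm hints
	minikube -p kubernetes-upgrade-496233 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-496233 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# Probe the same health endpoint kubeadm's wait-control-plane phase polls
	minikube -p kubernetes-upgrade-496233 ssh -- curl -sS http://127.0.0.1:10248/healthz

	# Workaround from the Suggestion line: retry with an explicit cgroup driver
	minikube start -p kubernetes-upgrade-496233 --extra-config=kubelet.cgroup-driver=systemd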
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
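The Audit table in the post-mortem minikube logs further below records the full sequence behind this exit status: a v1.28.0 start, a stop, then the v1.35.0-beta.0 start that fails. A repro sketch with the same flags (binary path as invoked by the test harness):

	# v1.28.0 bring-up and stop both succeed per the audit log
	out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-496233

	# the upgrade start to v1.35.0-beta.0 is the step that exits with status 109
	out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd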
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-496233 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-496233 version --output=json: exit status 1 (91.728544ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
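kubectl can still print the client half of the version JSON, but the server call to 192.168.76.2:8443 is refused, consistent with the apiserver never having come up. A quick host-side reachability check (address and port taken from the stderr above; assumes nc is installed):

	# Is anything listening on the published apiserver address?
	nc -zv 192.168.76.2 8443

	# Confirm which server URL the kubeconfig context points at
	kubectl config view --minify --context kubernetes-upgrade-496233 -o jsonpath='{.clusters[0].cluster.server}'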
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-05 07:27:02.591630344 +0000 UTC m=+4916.656529390
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-496233
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-496233:

-- stdout --
	[
	    {
	        "Id": "7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d",
	        "Created": "2025-12-05T07:14:03.666002696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:14:34.827827798Z",
	            "FinishedAt": "2025-12-05T07:14:33.620898453Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d/hosts",
	        "LogPath": "/var/lib/docker/containers/7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d/7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d-json.log",
	        "Name": "/kubernetes-upgrade-496233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-496233:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-496233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dcdf3f6431dde2487d0017e67ee37945462200cf3a2d2754e563e34a1e03d5d",
	                "LowerDir": "/var/lib/docker/overlay2/f3e16523ac7a2d83157749005d1071ee1527ff636695d216c53c26235ea9617d-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f3e16523ac7a2d83157749005d1071ee1527ff636695d216c53c26235ea9617d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f3e16523ac7a2d83157749005d1071ee1527ff636695d216c53c26235ea9617d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f3e16523ac7a2d83157749005d1071ee1527ff636695d216c53c26235ea9617d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-496233",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-496233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-496233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-496233",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-496233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b24156b05d6da145e96bf01170abd773d18542434a1e0abbccd56c8ef84e5c7",
	            "SandboxKey": "/var/run/docker/netns/6b24156b05d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-496233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:11:ac:32:ca:a5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bcb541774cd87fe39e7b62c82a28593e4df46acdd7176d9c4a2bc6a5b019887b",
	                    "EndpointID": "c9107dec5e70d12ee74a7fa77fc8333d3c15129ff915ff5bb426152c7b6e9854",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-496233",
	                        "7dcdf3f6431d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
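The inspect dump shows the container side is healthy: State.Status is "running", 8443/tcp is published on 127.0.0.1:33016, and the node holds 192.168.76.2 on its profile network, so the refused connections point at the guest (kubelet/apiserver) rather than Docker networking. The same fields can be pulled with Go templates instead of reading the full JSON (container name as in this run):

	# Container state and init PID
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' kubernetes-upgrade-496233

	# Host binding for the apiserver port
	docker inspect -f '{{index .NetworkSettings.Ports "8443/tcp"}}' kubernetes-upgrade-496233

	# Per-network IP addresses
	docker inspect -f '{{range $n, $e := .NetworkSettings.Networks}}{{$n}}={{$e.IPAddress}} {{end}}' kubernetes-upgrade-496233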
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-496233 -n kubernetes-upgrade-496233
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-496233 -n kubernetes-upgrade-496233: exit status 2 (334.912799ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-496233 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p insufficient-storage-698834                                                                                                                        │ insufficient-storage-698834 │ jenkins │ v1.37.0 │ 05 Dec 25 07:12 UTC │ 05 Dec 25 07:12 UTC │
	│ start   │ -p NoKubernetes-912948 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                   │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:12 UTC │                     │
	│ start   │ -p NoKubernetes-912948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                           │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:12 UTC │ 05 Dec 25 07:13 UTC │
	│ start   │ -p missing-upgrade-486753 --memory=3072 --driver=docker  --container-runtime=containerd                                                               │ missing-upgrade-486753      │ jenkins │ v1.35.0 │ 05 Dec 25 07:12 UTC │ 05 Dec 25 07:13 UTC │
	│ start   │ -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                           │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ delete  │ -p NoKubernetes-912948                                                                                                                                │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ start   │ -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                           │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ ssh     │ -p NoKubernetes-912948 sudo systemctl is-active --quiet service kubelet                                                                               │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │                     │
	│ stop    │ -p NoKubernetes-912948                                                                                                                                │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ start   │ -p missing-upgrade-486753 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ missing-upgrade-486753      │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:15 UTC │
	│ start   │ -p NoKubernetes-912948 --driver=docker  --container-runtime=containerd                                                                                │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ ssh     │ -p NoKubernetes-912948 sudo systemctl is-active --quiet service kubelet                                                                               │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │                     │
	│ delete  │ -p NoKubernetes-912948                                                                                                                                │ NoKubernetes-912948         │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:13 UTC │
	│ start   │ -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd        │ kubernetes-upgrade-496233   │ jenkins │ v1.37.0 │ 05 Dec 25 07:13 UTC │ 05 Dec 25 07:14 UTC │
	│ stop    │ -p kubernetes-upgrade-496233                                                                                                                          │ kubernetes-upgrade-496233   │ jenkins │ v1.37.0 │ 05 Dec 25 07:14 UTC │ 05 Dec 25 07:14 UTC │
	│ start   │ -p kubernetes-upgrade-496233 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-496233   │ jenkins │ v1.37.0 │ 05 Dec 25 07:14 UTC │                     │
	│ delete  │ -p missing-upgrade-486753                                                                                                                             │ missing-upgrade-486753      │ jenkins │ v1.37.0 │ 05 Dec 25 07:15 UTC │ 05 Dec 25 07:15 UTC │
	│ start   │ -p stopped-upgrade-262727 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                            │ stopped-upgrade-262727      │ jenkins │ v1.35.0 │ 05 Dec 25 07:15 UTC │ 05 Dec 25 07:16 UTC │
	│ stop    │ stopped-upgrade-262727 stop                                                                                                                           │ stopped-upgrade-262727      │ jenkins │ v1.35.0 │ 05 Dec 25 07:16 UTC │ 05 Dec 25 07:16 UTC │
	│ start   │ -p stopped-upgrade-262727 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ stopped-upgrade-262727      │ jenkins │ v1.37.0 │ 05 Dec 25 07:16 UTC │ 05 Dec 25 07:20 UTC │
	│ delete  │ -p stopped-upgrade-262727                                                                                                                             │ stopped-upgrade-262727      │ jenkins │ v1.37.0 │ 05 Dec 25 07:20 UTC │ 05 Dec 25 07:20 UTC │
	│ start   │ -p running-upgrade-217876 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                            │ running-upgrade-217876      │ jenkins │ v1.35.0 │ 05 Dec 25 07:20 UTC │ 05 Dec 25 07:21 UTC │
	│ start   │ -p running-upgrade-217876 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ running-upgrade-217876      │ jenkins │ v1.37.0 │ 05 Dec 25 07:21 UTC │ 05 Dec 25 07:26 UTC │
	│ delete  │ -p running-upgrade-217876                                                                                                                             │ running-upgrade-217876      │ jenkins │ v1.37.0 │ 05 Dec 25 07:26 UTC │ 05 Dec 25 07:26 UTC │
	│ start   │ -p pause-557657 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd                                       │ pause-557657                │ jenkins │ v1.37.0 │ 05 Dec 25 07:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:26:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:26:08.411193  240322 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:26:08.411304  240322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:26:08.411308  240322 out.go:374] Setting ErrFile to fd 2...
	I1205 07:26:08.411312  240322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:26:08.411558  240322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:26:08.412039  240322 out.go:368] Setting JSON to false
	I1205 07:26:08.412973  240322 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7715,"bootTime":1764911853,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:26:08.413035  240322 start.go:143] virtualization:  
	I1205 07:26:08.416354  240322 out.go:179] * [pause-557657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:26:08.420583  240322 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:26:08.420642  240322 notify.go:221] Checking for updates...
	I1205 07:26:08.426987  240322 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:26:08.430095  240322 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:26:08.433376  240322 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:26:08.436434  240322 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:26:08.439399  240322 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:26:08.442964  240322 config.go:182] Loaded profile config "kubernetes-upgrade-496233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:26:08.443065  240322 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:26:08.466808  240322 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:26:08.466919  240322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:26:08.533909  240322 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:26:08.522953032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:26:08.534030  240322 docker.go:319] overlay module found
	I1205 07:26:08.539146  240322 out.go:179] * Using the docker driver based on user configuration
	I1205 07:26:08.542104  240322 start.go:309] selected driver: docker
	I1205 07:26:08.542114  240322 start.go:927] validating driver "docker" against <nil>
	I1205 07:26:08.542127  240322 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:26:08.542909  240322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:26:08.602802  240322 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:26:08.592907534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:26:08.602951  240322 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:26:08.603165  240322 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:26:08.606113  240322 out.go:179] * Using Docker driver with root privileges
	I1205 07:26:08.610113  240322 cni.go:84] Creating CNI manager for ""
	I1205 07:26:08.610173  240322 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:26:08.610180  240322 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:26:08.610262  240322 start.go:353] cluster config:
	{Name:pause-557657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-557657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:26:08.613435  240322 out.go:179] * Starting "pause-557657" primary control-plane node in "pause-557657" cluster
	I1205 07:26:08.617599  240322 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:26:08.620550  240322 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:26:08.623333  240322 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:26:08.623370  240322 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1205 07:26:08.623381  240322 cache.go:65] Caching tarball of preloaded images
	I1205 07:26:08.623474  240322 preload.go:238] Found /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 07:26:08.623482  240322 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1205 07:26:08.623594  240322 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/config.json ...
	I1205 07:26:08.623609  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/config.json: {Name:mkedc4561a0d118c4d2c08b318fbd43d37e05907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:08.623755  240322 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:26:08.645364  240322 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:26:08.645375  240322 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:26:08.645388  240322 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:26:08.645419  240322 start.go:360] acquireMachinesLock for pause-557657: {Name:mkf476fac60aa5fc5f13c4495173abea15e8d296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:26:08.645515  240322 start.go:364] duration metric: took 81.65µs to acquireMachinesLock for "pause-557657"
	I1205 07:26:08.645538  240322 start.go:93] Provisioning new machine with config: &{Name:pause-557657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-557657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:26:08.645605  240322 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:26:08.649004  240322 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:26:08.649248  240322 start.go:159] libmachine.API.Create for "pause-557657" (driver="docker")
	I1205 07:26:08.649271  240322 client.go:173] LocalClient.Create starting
	I1205 07:26:08.649333  240322 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:26:08.649363  240322 main.go:143] libmachine: Decoding PEM data...
	I1205 07:26:08.649381  240322 main.go:143] libmachine: Parsing certificate...
	I1205 07:26:08.649439  240322 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:26:08.649458  240322 main.go:143] libmachine: Decoding PEM data...
	I1205 07:26:08.649468  240322 main.go:143] libmachine: Parsing certificate...
	I1205 07:26:08.649843  240322 cli_runner.go:164] Run: docker network inspect pause-557657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:26:08.668577  240322 cli_runner.go:211] docker network inspect pause-557657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:26:08.668651  240322 network_create.go:284] running [docker network inspect pause-557657] to gather additional debugging logs...
	I1205 07:26:08.668665  240322 cli_runner.go:164] Run: docker network inspect pause-557657
	W1205 07:26:08.683681  240322 cli_runner.go:211] docker network inspect pause-557657 returned with exit code 1
	I1205 07:26:08.683699  240322 network_create.go:287] error running [docker network inspect pause-557657]: docker network inspect pause-557657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network pause-557657 not found
	I1205 07:26:08.683711  240322 network_create.go:289] output of [docker network inspect pause-557657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network pause-557657 not found
	
	** /stderr **
	I1205 07:26:08.683811  240322 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:26:08.700805  240322 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:26:08.701074  240322 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:26:08.701375  240322 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:26:08.701682  240322 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bcb541774cd8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9a:21:00:d4:f8:06} reservation:<nil>}
	I1205 07:26:08.702072  240322 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b8f00}
	I1205 07:26:08.702101  240322 network_create.go:124] attempt to create docker network pause-557657 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:26:08.702160  240322 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-557657 pause-557657
	I1205 07:26:08.769308  240322 network_create.go:108] docker network pause-557657 192.168.85.0/24 created
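For reference, the subnet selection above can be read back from the daemon with the same docker CLI calls the log records; a minimal sketch (the network name pause-557657 is taken from this run):

    # Expect "192.168.85.0/24 192.168.85.1", per the network_create lines above
    docker network inspect pause-557657 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'

    # Mirror the "skipping subnet ... taken" scan: list every bridge network's subnet
    docker network ls --filter driver=bridge -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'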
	I1205 07:26:08.769329  240322 kic.go:121] calculated static IP "192.168.85.2" for the "pause-557657" container
	I1205 07:26:08.769411  240322 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:26:08.788317  240322 cli_runner.go:164] Run: docker volume create pause-557657 --label name.minikube.sigs.k8s.io=pause-557657 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:26:08.805402  240322 oci.go:103] Successfully created a docker volume pause-557657
	I1205 07:26:08.805491  240322 cli_runner.go:164] Run: docker run --rm --name pause-557657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-557657 --entrypoint /usr/bin/test -v pause-557657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:26:09.350183  240322 oci.go:107] Successfully prepared a docker volume pause-557657
	I1205 07:26:09.350235  240322 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:26:09.350244  240322 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:26:09.350327  240322 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v pause-557657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 07:26:15.889556  240322 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v pause-557657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (6.539194932s)
	I1205 07:26:15.889575  240322 kic.go:203] duration metric: took 6.539328578s to extract preloaded images to volume ...
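A quick sanity check that the preload really landed in the pause-557657 volume is to list the extracted containerd root; a hedged sketch reusing the kicbase image and the entrypoint-override pattern from the run above:

    # The tar above unpacked into the volume mounted at /var; the io.containerd.* stores should now exist
    docker run --rm --entrypoint /bin/ls \
      -v pause-557657:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b \
      /var/lib/containerd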
	W1205 07:26:15.889717  240322 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:26:15.889826  240322 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:26:15.944067  240322 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname pause-557657 --name pause-557657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-557657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=pause-557657 --network pause-557657 --ip 192.168.85.2 --volume pause-557657:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
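The run above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 ports; the SSH mapping the provisioner dials a few lines below (127.0.0.1:33033) can be read back directly, e.g.:

    # Show which host port 22/tcp landed on (here 127.0.0.1:33033, matching the SSH dials below)
    docker port pause-557657 22/tcp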
	I1205 07:26:16.271039  240322 cli_runner.go:164] Run: docker container inspect pause-557657 --format={{.State.Running}}
	I1205 07:26:16.292399  240322 cli_runner.go:164] Run: docker container inspect pause-557657 --format={{.State.Status}}
	I1205 07:26:16.316160  240322 cli_runner.go:164] Run: docker exec pause-557657 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:26:16.370991  240322 oci.go:144] the created container "pause-557657" has a running status.
	I1205 07:26:16.371011  240322 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa...
	I1205 07:26:16.483031  240322 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:26:16.518921  240322 cli_runner.go:164] Run: docker container inspect pause-557657 --format={{.State.Status}}
	I1205 07:26:16.539617  240322 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:26:16.539628  240322 kic_runner.go:114] Args: [docker exec --privileged pause-557657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:26:16.617394  240322 cli_runner.go:164] Run: docker container inspect pause-557657 --format={{.State.Status}}
	I1205 07:26:16.654001  240322 machine.go:94] provisionDockerMachine start ...
	I1205 07:26:16.654080  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:16.682314  240322 main.go:143] libmachine: Using SSH client type: native
	I1205 07:26:16.682644  240322 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1205 07:26:16.682650  240322 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:26:16.683479  240322 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 07:26:19.836905  240322 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-557657
	
	I1205 07:26:19.836921  240322 ubuntu.go:182] provisioning hostname "pause-557657"
	I1205 07:26:19.836990  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:19.864722  240322 main.go:143] libmachine: Using SSH client type: native
	I1205 07:26:19.865044  240322 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1205 07:26:19.865053  240322 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-557657 && echo "pause-557657" | sudo tee /etc/hostname
	I1205 07:26:20.025219  240322 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-557657
	
	I1205 07:26:20.025307  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:20.044766  240322 main.go:143] libmachine: Using SSH client type: native
	I1205 07:26:20.045064  240322 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1205 07:26:20.045076  240322 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-557657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-557657/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-557657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:26:20.193351  240322 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:26:20.193365  240322 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:26:20.193393  240322 ubuntu.go:190] setting up certificates
	I1205 07:26:20.193404  240322 provision.go:84] configureAuth start
	I1205 07:26:20.193464  240322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-557657
	I1205 07:26:20.210204  240322 provision.go:143] copyHostCerts
	I1205 07:26:20.210268  240322 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:26:20.210275  240322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:26:20.210350  240322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:26:20.210450  240322 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:26:20.210454  240322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:26:20.210479  240322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:26:20.210535  240322 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:26:20.210538  240322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:26:20.210564  240322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:26:20.210616  240322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.pause-557657 san=[127.0.0.1 192.168.85.2 localhost minikube pause-557657]
	I1205 07:26:20.640857  240322 provision.go:177] copyRemoteCerts
	I1205 07:26:20.640910  240322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:26:20.640948  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:20.660481  240322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa Username:docker}
	I1205 07:26:20.765075  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:26:20.782592  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 07:26:20.800942  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:26:20.819993  240322 provision.go:87] duration metric: took 626.567784ms to configureAuth
	I1205 07:26:20.820023  240322 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:26:20.820204  240322 config.go:182] Loaded profile config "pause-557657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:26:20.820208  240322 machine.go:97] duration metric: took 4.166196967s to provisionDockerMachine
	I1205 07:26:20.820214  240322 client.go:176] duration metric: took 12.170938586s to LocalClient.Create
	I1205 07:26:20.820236  240322 start.go:167] duration metric: took 12.170989081s to libmachine.API.Create "pause-557657"
	I1205 07:26:20.820242  240322 start.go:293] postStartSetup for "pause-557657" (driver="docker")
	I1205 07:26:20.820261  240322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:26:20.820309  240322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:26:20.820350  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:20.837375  240322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa Username:docker}
	I1205 07:26:20.941645  240322 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:26:20.945144  240322 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:26:20.945183  240322 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:26:20.945193  240322 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:26:20.945250  240322 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:26:20.945333  240322 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:26:20.945439  240322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:26:20.953209  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:26:20.971550  240322 start.go:296] duration metric: took 151.295308ms for postStartSetup
	I1205 07:26:20.971894  240322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-557657
	I1205 07:26:20.988385  240322 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/config.json ...
	I1205 07:26:20.988661  240322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:26:20.988699  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:21.007349  240322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa Username:docker}
	I1205 07:26:21.110335  240322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:26:21.115215  240322 start.go:128] duration metric: took 12.469593155s to createHost
	I1205 07:26:21.115231  240322 start.go:83] releasing machines lock for "pause-557657", held for 12.469708799s
	I1205 07:26:21.115304  240322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-557657
	I1205 07:26:21.132704  240322 ssh_runner.go:195] Run: cat /version.json
	I1205 07:26:21.132742  240322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:26:21.132745  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:21.132795  240322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-557657
	I1205 07:26:21.158782  240322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa Username:docker}
	I1205 07:26:21.160144  240322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/pause-557657/id_rsa Username:docker}
	I1205 07:26:21.256659  240322 ssh_runner.go:195] Run: systemctl --version
	I1205 07:26:21.348325  240322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:26:21.353293  240322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:26:21.353365  240322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:26:21.392282  240322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:26:21.392295  240322 start.go:496] detecting cgroup driver to use...
	I1205 07:26:21.392329  240322 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:26:21.392376  240322 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:26:21.411238  240322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:26:21.424233  240322 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:26:21.424301  240322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:26:21.442051  240322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:26:21.461996  240322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:26:21.582569  240322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:26:21.700022  240322 docker.go:234] disabling docker service ...
	I1205 07:26:21.700081  240322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:26:21.721626  240322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:26:21.735090  240322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:26:21.854333  240322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:26:21.981261  240322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:26:21.995677  240322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:26:22.016894  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:26:22.027497  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:26:22.037889  240322 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:26:22.037950  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:26:22.047464  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:26:22.056998  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:26:22.065845  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:26:22.074993  240322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:26:22.083361  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:26:22.092267  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:26:22.106630  240322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:26:22.118969  240322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:26:22.127250  240322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:26:22.135323  240322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:26:22.251823  240322 ssh_runner.go:195] Run: sudo systemctl restart containerd
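The sed pipeline above rewrites /etc/containerd/config.toml in place; after the restart the result can be spot-checked with a grep (a sketch, paths exactly as logged):

    # Confirm the cgroup driver, sandbox image, CNI conf dir and unprivileged-ports setting
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # Expected, per the sed commands above:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true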
	I1205 07:26:22.377025  240322 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:26:22.377083  240322 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:26:22.380843  240322 start.go:564] Will wait 60s for crictl version
	I1205 07:26:22.380899  240322 ssh_runner.go:195] Run: which crictl
	I1205 07:26:22.384325  240322 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:26:22.409020  240322 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:26:22.409087  240322 ssh_runner.go:195] Run: containerd --version
	I1205 07:26:22.435076  240322 ssh_runner.go:195] Run: containerd --version
	I1205 07:26:22.460199  240322 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	I1205 07:26:22.463202  240322 cli_runner.go:164] Run: docker network inspect pause-557657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:26:22.479149  240322 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:26:22.482965  240322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:26:22.492477  240322 kubeadm.go:884] updating cluster {Name:pause-557657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-557657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:26:22.492591  240322 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:26:22.492650  240322 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:26:22.523313  240322 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:26:22.523324  240322 containerd.go:534] Images already preloaded, skipping extraction
	I1205 07:26:22.523382  240322 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:26:22.547711  240322 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:26:22.547723  240322 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:26:22.547730  240322 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1205 07:26:22.547821  240322 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-557657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-557657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:26:22.547877  240322 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:26:22.573286  240322 cni.go:84] Creating CNI manager for ""
	I1205 07:26:22.573298  240322 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:26:22.573312  240322 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:26:22.573332  240322 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-557657 NodeName:pause-557657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:26:22.573442  240322 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "pause-557657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:26:22.573505  240322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:26:22.581129  240322 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:26:22.581210  240322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:26:22.588722  240322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 07:26:22.601721  240322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:26:22.614939  240322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
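Two quick checks are possible at this point, sketched under the assumption that recent kubeadm releases ship the `config validate` subcommand: the merged kubelet unit (service file plus the 10-kubeadm.conf drop-in just copied) can be rendered by systemd, and the kubeadm config staged above as kubeadm.yaml.new can be linted offline before the init below runs.

    # Print kubelet.service with its drop-in, including the ExecStart shown above
    sudo systemctl cat kubelet

    # Validate the staged kubeadm config without touching the node
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new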
	I1205 07:26:22.627470  240322 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:26:22.631007  240322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:26:22.640490  240322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:26:22.784537  240322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:26:22.803193  240322 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657 for IP: 192.168.85.2
	I1205 07:26:22.803203  240322 certs.go:195] generating shared ca certs ...
	I1205 07:26:22.803217  240322 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:22.803349  240322 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:26:22.803391  240322 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:26:22.803397  240322 certs.go:257] generating profile certs ...
	I1205 07:26:22.803453  240322 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.key
	I1205 07:26:22.803462  240322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.crt with IP's: []
	I1205 07:26:23.109221  240322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.crt ...
	I1205 07:26:23.109237  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.crt: {Name:mk8987ae0d45ac09840cc91036f1bbf1ee8c7d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.109433  240322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.key ...
	I1205 07:26:23.109443  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.key: {Name:mkca7898a40f1004d3815008d522a144065946d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.109540  240322 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key.e403576e
	I1205 07:26:23.109561  240322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt.e403576e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:26:23.535314  240322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt.e403576e ...
	I1205 07:26:23.535329  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt.e403576e: {Name:mkc62495dc28745b3547654070c5dfa033a2bcf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.535513  240322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key.e403576e ...
	I1205 07:26:23.535521  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key.e403576e: {Name:mkd2d2b4a7b2196eb08c2fefd171e4f7a2bd1a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.535603  240322 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt.e403576e -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt
	I1205 07:26:23.535681  240322 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key.e403576e -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key
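The apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; a hedged openssl one-liner to confirm them on disk (profile path taken from this run):

    # Print the Subject Alternative Name block of the freshly minted apiserver cert
    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt | grep -A1 'Subject Alternative Name'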
	I1205 07:26:23.535732  240322 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.key
	I1205 07:26:23.535745  240322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.crt with IP's: []
	I1205 07:26:23.631845  240322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.crt ...
	I1205 07:26:23.631860  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.crt: {Name:mk3486e335b28a150b04cb2fdc8c60aefc39786d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.632032  240322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.key ...
	I1205 07:26:23.632038  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.key: {Name:mk7a47109e8b20b7cad78f5531821f8a1f8ff72e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:23.632208  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:26:23.632246  240322 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:26:23.632253  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:26:23.632279  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:26:23.632301  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:26:23.632323  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:26:23.632372  240322 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:26:23.632927  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:26:23.650463  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:26:23.670776  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:26:23.687444  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:26:23.704341  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:26:23.726859  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:26:23.744435  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:26:23.762586  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:26:23.780304  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:26:23.798064  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:26:23.815548  240322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:26:23.832605  240322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:26:23.844871  240322 ssh_runner.go:195] Run: openssl version
	I1205 07:26:23.851194  240322 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:26:23.858350  240322 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:26:23.866606  240322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:26:23.870196  240322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:26:23.870250  240322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:26:23.911045  240322 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:26:23.918630  240322 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:26:23.925847  240322 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:26:23.933059  240322 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:26:23.940818  240322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:26:23.944484  240322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:26:23.944557  240322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:26:23.985517  240322 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:26:23.993050  240322 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:26:24.000386  240322 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:26:24.009494  240322 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:26:24.018374  240322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:26:24.022724  240322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:26:24.022783  240322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:26:24.064146  240322 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:26:24.072851  240322 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
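The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is exactly what the `openssl x509 -hash` invocations in the log compute; reproduced as a sketch:

    # Each -hash value is the basename OpenSSL's cert lookup expects for its .0 symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem        # -> 3ec20f2e
    openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem         # -> 51391683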
	I1205 07:26:24.081704  240322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:26:24.085514  240322 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:26:24.085560  240322 kubeadm.go:401] StartCluster: {Name:pause-557657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-557657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:26:24.085624  240322 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:26:24.085682  240322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:26:24.111597  240322 cri.go:89] found id: ""
	I1205 07:26:24.111659  240322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:26:24.121865  240322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:26:24.130445  240322 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:26:24.130508  240322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:26:24.138666  240322 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:26:24.138677  240322 kubeadm.go:158] found existing configuration files:
	
	I1205 07:26:24.138728  240322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:26:24.146897  240322 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:26:24.146953  240322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:26:24.154637  240322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:26:24.162490  240322 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:26:24.162547  240322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:26:24.169806  240322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:26:24.177231  240322 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:26:24.177285  240322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:26:24.184861  240322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:26:24.192341  240322 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:26:24.192398  240322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
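The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. An equivalent standalone sketch, reconstructed from the logged commands (not captured from this run):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected endpoint; otherwise remove it
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Here all four files are absent, so grep exits non-zero for each and the rm calls are no-ops before kubeadm init regenerates them.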
	I1205 07:26:24.199478  240322 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:26:24.237045  240322 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:26:24.237269  240322 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:26:24.261285  240322 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:26:24.261347  240322 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:26:24.261381  240322 kubeadm.go:319] OS: Linux
	I1205 07:26:24.261425  240322 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:26:24.261472  240322 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:26:24.261517  240322 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:26:24.261564  240322 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:26:24.261611  240322 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:26:24.261657  240322 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:26:24.261716  240322 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:26:24.261765  240322 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:26:24.261810  240322 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:26:24.341661  240322 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:26:24.341772  240322 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:26:24.341869  240322 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:26:24.363467  240322 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:26:24.369677  240322 out.go:252]   - Generating certificates and keys ...
	I1205 07:26:24.369759  240322 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:26:24.369824  240322 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:26:25.225421  240322 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:26:25.640068  240322 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:26:26.035424  240322 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:26:26.296766  240322 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:26:27.369693  240322 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:26:27.369819  240322 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost pause-557657] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:26:27.605405  240322 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:26:27.605695  240322 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost pause-557657] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:26:27.892194  240322 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:26:28.114543  240322 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:26:28.423732  240322 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:26:28.423974  240322 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:26:28.820678  240322 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:26:29.569763  240322 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:26:29.892056  240322 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:26:30.317236  240322 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:26:30.725357  240322 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:26:30.726096  240322 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:26:30.728717  240322 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:26:30.732235  240322 out.go:252]   - Booting up control plane ...
	I1205 07:26:30.732327  240322 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:26:30.732402  240322 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:26:30.732468  240322 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:26:30.748695  240322 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:26:30.748802  240322 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:26:30.758141  240322 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:26:30.758476  240322 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:26:30.758724  240322 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:26:30.889810  240322 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:26:30.889925  240322 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:26:31.893561  240322 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004291619s
	I1205 07:26:31.896888  240322 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:26:31.897005  240322 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 07:26:31.897184  240322 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:26:31.897267  240322 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:26:37.342034  240322 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.444876277s
	I1205 07:26:37.417128  240322 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.519356486s
	I1205 07:26:38.899270  240322 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001822113s
	I1205 07:26:38.933768  240322 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:26:38.950494  240322 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:26:38.963575  240322 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:26:38.963803  240322 kubeadm.go:319] [mark-control-plane] Marking the node pause-557657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:26:38.979832  240322 kubeadm.go:319] [bootstrap-token] Using token: zu79lw.6nfxjsbgds24yv0j
	I1205 07:26:38.982784  240322 out.go:252]   - Configuring RBAC rules ...
	I1205 07:26:38.982913  240322 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 07:26:38.988776  240322 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 07:26:38.996872  240322 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 07:26:39.001462  240322 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 07:26:39.007985  240322 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 07:26:39.013196  240322 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 07:26:39.305951  240322 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 07:26:39.731183  240322 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 07:26:40.305911  240322 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 07:26:40.307004  240322 kubeadm.go:319] 
	I1205 07:26:40.307070  240322 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 07:26:40.307074  240322 kubeadm.go:319] 
	I1205 07:26:40.307150  240322 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 07:26:40.307153  240322 kubeadm.go:319] 
	I1205 07:26:40.307182  240322 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 07:26:40.307240  240322 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 07:26:40.307289  240322 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 07:26:40.307292  240322 kubeadm.go:319] 
	I1205 07:26:40.307345  240322 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 07:26:40.307347  240322 kubeadm.go:319] 
	I1205 07:26:40.307394  240322 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 07:26:40.307396  240322 kubeadm.go:319] 
	I1205 07:26:40.307447  240322 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 07:26:40.307521  240322 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 07:26:40.307588  240322 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 07:26:40.307591  240322 kubeadm.go:319] 
	I1205 07:26:40.307674  240322 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 07:26:40.307750  240322 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 07:26:40.307753  240322 kubeadm.go:319] 
	I1205 07:26:40.307836  240322 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zu79lw.6nfxjsbgds24yv0j \
	I1205 07:26:40.307939  240322 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c \
	I1205 07:26:40.307958  240322 kubeadm.go:319] 	--control-plane 
	I1205 07:26:40.307961  240322 kubeadm.go:319] 
	I1205 07:26:40.308045  240322 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 07:26:40.308047  240322 kubeadm.go:319] 
	I1205 07:26:40.308129  240322 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zu79lw.6nfxjsbgds24yv0j \
	I1205 07:26:40.308230  240322 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c 
	I1205 07:26:40.311712  240322 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 07:26:40.311949  240322 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:26:40.312060  240322 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:26:40.312082  240322 cni.go:84] Creating CNI manager for ""
	I1205 07:26:40.312089  240322 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:26:40.317214  240322 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 07:26:40.320084  240322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 07:26:40.323989  240322 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 07:26:40.323999  240322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 07:26:40.337265  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 07:26:40.629515  240322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 07:26:40.629648  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:40.629739  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-557657 minikube.k8s.io/updated_at=2025_12_05T07_26_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=pause-557657 minikube.k8s.io/primary=true
	I1205 07:26:40.645585  240322 ops.go:34] apiserver oom_adj: -16
	I1205 07:26:40.764898  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:41.265385  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:41.765892  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:42.265976  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:42.765780  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:43.265567  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:43.765413  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:44.265014  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:44.765609  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:45.265443  240322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:26:45.461358  240322 kubeadm.go:1114] duration metric: took 4.831753241s to wait for elevateKubeSystemPrivileges
	I1205 07:26:45.461375  240322 kubeadm.go:403] duration metric: took 21.375821883s to StartCluster
	I1205 07:26:45.461389  240322 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:45.461445  240322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:26:45.462347  240322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:26:45.462526  240322 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:26:45.462617  240322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:26:45.462830  240322 config.go:182] Loaded profile config "pause-557657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:26:45.465838  240322 out.go:179] * Verifying Kubernetes components...
	I1205 07:26:45.468693  240322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:26:45.670412  240322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 07:26:45.726182  240322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:26:45.957522  240322 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1205 07:26:45.959001  240322 node_ready.go:35] waiting up to 6m0s for node "pause-557657" to be "Ready" ...
	I1205 07:26:46.463947  240322 kapi.go:214] "coredns" deployment in "kube-system" namespace and "pause-557657" context rescaled to 1 replicas
	W1205 07:26:47.962438  240322 node_ready.go:57] node "pause-557657" has "Ready":"False" status (will retry)
	W1205 07:26:49.962484  240322 node_ready.go:57] node "pause-557657" has "Ready":"False" status (will retry)
	W1205 07:26:52.462171  240322 node_ready.go:57] node "pause-557657" has "Ready":"False" status (will retry)
	W1205 07:26:54.463396  240322 node_ready.go:57] node "pause-557657" has "Ready":"False" status (will retry)
	W1205 07:26:56.963698  240322 node_ready.go:57] node "pause-557657" has "Ready":"False" status (will retry)
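The sed pipeline logged at 07:26:45.670412 rewrites the coredns ConfigMap so in-cluster DNS can resolve host.minikube.internal. A sketch of the Corefile fragment it injects, reconstructed from the sed expressions themselves rather than captured from the live ConfigMap:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

The second sed expression additionally inserts a `log` directive ahead of the existing `errors` line.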
	I1205 07:27:01.453527  201585 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000160045s
	I1205 07:27:01.453563  201585 kubeadm.go:319] 
	I1205 07:27:01.453643  201585 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:27:01.453688  201585 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:27:01.453827  201585 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:27:01.453843  201585 kubeadm.go:319] 
	I1205 07:27:01.453960  201585 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:27:01.454006  201585 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:27:01.454039  201585 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:27:01.454046  201585 kubeadm.go:319] 
	I1205 07:27:01.458588  201585 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:27:01.459097  201585 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:27:01.459219  201585 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:27:01.459499  201585 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:27:01.459508  201585 kubeadm.go:319] 
	I1205 07:27:01.459588  201585 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:27:01.459652  201585 kubeadm.go:403] duration metric: took 12m9.799121584s to StartCluster
	I1205 07:27:01.459690  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:27:01.459773  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:27:01.487360  201585 cri.go:89] found id: ""
	I1205 07:27:01.487395  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.487404  201585 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:27:01.487411  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:27:01.487473  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:27:01.513700  201585 cri.go:89] found id: ""
	I1205 07:27:01.513731  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.513739  201585 logs.go:284] No container was found matching "etcd"
	I1205 07:27:01.513746  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:27:01.513806  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:27:01.538290  201585 cri.go:89] found id: ""
	I1205 07:27:01.538316  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.538325  201585 logs.go:284] No container was found matching "coredns"
	I1205 07:27:01.538332  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:27:01.538389  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:27:01.566506  201585 cri.go:89] found id: ""
	I1205 07:27:01.566527  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.566535  201585 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:27:01.566542  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:27:01.566599  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:27:01.592455  201585 cri.go:89] found id: ""
	I1205 07:27:01.592480  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.592489  201585 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:27:01.592495  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:27:01.592555  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:27:01.619653  201585 cri.go:89] found id: ""
	I1205 07:27:01.619679  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.619687  201585 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:27:01.619695  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:27:01.619751  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:27:01.645975  201585 cri.go:89] found id: ""
	I1205 07:27:01.645999  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.646007  201585 logs.go:284] No container was found matching "kindnet"
	I1205 07:27:01.646015  201585 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:27:01.646110  201585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:27:01.673331  201585 cri.go:89] found id: ""
	I1205 07:27:01.673357  201585 logs.go:282] 0 containers: []
	W1205 07:27:01.673366  201585 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:27:01.673377  201585 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:27:01.673388  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:27:01.742506  201585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:27:01.742524  201585 logs.go:123] Gathering logs for containerd ...
	I1205 07:27:01.742537  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:27:01.783198  201585 logs.go:123] Gathering logs for container status ...
	I1205 07:27:01.783231  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:27:01.820852  201585 logs.go:123] Gathering logs for kubelet ...
	I1205 07:27:01.820877  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:27:01.887749  201585 logs.go:123] Gathering logs for dmesg ...
	I1205 07:27:01.887790  201585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:27:01.903169  201585 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:27:01.903245  201585 out.go:285] * 
	W1205 07:27:01.903299  201585 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:27:01.903316  201585 out.go:285] * 
	W1205 07:27:01.905628  201585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:27:01.910909  201585 out.go:203] 
	W1205 07:27:01.914875  201585 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000160045s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:27:01.914927  201585 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:27:01.914948  201585 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:27:01.918208  201585 out.go:203] 
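The root cause is visible in the kubelet journal further below: kubelet v1.35.0-beta.0 validates the host cgroup version at startup and exits on a cgroup v1 host unless the option named in the kubeadm warning is relaxed. Two remediation sketches, both derived from this log's own messages and not verified against this run:

    # minikube's suggestion, verbatim from the output above:
    out/minikube-linux-arm64 start -p kubernetes-upgrade-496233 \
      --extra-config=kubelet.cgroup-driver=systemd

    # or, per the kubeadm warning, explicitly re-enable cgroup v1 support in the
    # kubelet configuration (field name taken from the warning text; check the
    # v1.35 kubelet docs before relying on it):
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   failCgroupV1: false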
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.651527964Z" level=info msg="StopPodSandbox for \"ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3\" returns successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.652066948Z" level=info msg="RemovePodSandbox for \"ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.652169628Z" level=info msg="Forcibly stopping sandbox \"ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.652215799Z" level=info msg="Container to stop \"7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.652562083Z" level=info msg="TearDown network for sandbox \"ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3\" successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.662366842Z" level=info msg="Ensure that sandbox ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3 in task-service has been cleanup successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.668387194Z" level=info msg="RemovePodSandbox \"ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3\" returns successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.668875265Z" level=info msg="StopPodSandbox for \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.668942424Z" level=info msg="Container to stop \"86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.669355893Z" level=info msg="TearDown network for sandbox \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\" successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.669404279Z" level=info msg="StopPodSandbox for \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\" returns successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.669740521Z" level=info msg="RemovePodSandbox for \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.669892916Z" level=info msg="Forcibly stopping sandbox \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.669931456Z" level=info msg="Container to stop \"86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.670300904Z" level=info msg="TearDown network for sandbox \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\" successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.678089733Z" level=info msg="Ensure that sandbox 3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669 in task-service has been cleanup successfully"
	Dec 05 07:18:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:18:56.684228232Z" level=info msg="RemovePodSandbox \"3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669\" returns successfully"
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.628799435Z" level=info msg="container event discarded" container=d6a3dde6f899b44ab5ba520893e4a1961592235e699c85ecfcd88a18158bca9f type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.644130574Z" level=info msg="container event discarded" container=fa5edbb472852d0d445e5dd827387d612f8cc62f21a41e7c40d76b48f9db5105 type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.654341532Z" level=info msg="container event discarded" container=907ab7649758d433fa67627692dab4813f4d6bf32c758bcca8a449f2e8865cd2 type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.654402185Z" level=info msg="container event discarded" container=96084d4193f91393bc4b83871b9c81ebaefa2c3f5bbf33c19d73440c08962505 type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.671820476Z" level=info msg="container event discarded" container=7ee5e95f10629e0d8b2284c669ae2e9688a5bd205fe2061d010e62a1380b2e96 type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.671891393Z" level=info msg="container event discarded" container=ae9ca2356f2e54b7e57744ef3cfc74c89a80940481ae16c9c1f67822676f2ce3 type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.688325970Z" level=info msg="container event discarded" container=86071f99e9b5cf2f99bab0a935d91bb940dd3aa8914096157ac06a43d19e8dae type=CONTAINER_DELETED_EVENT
	Dec 05 07:23:56 kubernetes-upgrade-496233 containerd[555]: time="2025-12-05T07:23:56.688387460Z" level=info msg="container event discarded" container=3352d88573358e9b3de28cb898291f01c1c4bf5ca376b1215fa2af54f5b48669 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:27:03 up  2:09,  0 user,  load average: 2.21, 1.56, 1.70
	Linux kubernetes-upgrade-496233 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:27:00 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:01 kubernetes-upgrade-496233 kubelet[14474]: E1205 07:27:01.145077   14474 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:01 kubernetes-upgrade-496233 kubelet[14568]: E1205 07:27:01.953982   14568 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:27:01 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:27:02 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 07:27:02 kubernetes-upgrade-496233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:02 kubernetes-upgrade-496233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:02 kubernetes-upgrade-496233 kubelet[14578]: E1205 07:27:02.657657   14578 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:27:02 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:27:02 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:27:03 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 07:27:03 kubernetes-upgrade-496233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:03 kubernetes-upgrade-496233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:27:03 kubernetes-upgrade-496233 kubelet[14632]: E1205 07:27:03.405587   14632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:27:03 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:27:03 kubernetes-upgrade-496233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
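The repeating validation error above ("configured to not run on a host using cgroup v1") can be corroborated with a standard host-side probe of the cgroup filesystem type (a general Linux check, not part of this run):

    stat -fc %T /sys/fs/cgroup/
    # prints "cgroup2fs" on a unified cgroup v2 host, "tmpfs" on a legacy cgroup v1 host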
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-496233 -n kubernetes-upgrade-496233
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-496233 -n kubernetes-upgrade-496233: exit status 2 (359.659244ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-496233" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-496233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-496233
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-496233: (2.207968516s)
--- FAIL: TestKubernetesUpgrade (789.72s)

TestStartStop/group/no-preload/serial/FirstStart (510.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m29.047997529s)

-- stdout --
	* [no-preload-241270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-241270" primary control-plane node in "no-preload-241270" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	
	

-- /stdout --
** stderr ** 
	I1205 07:34:51.139776  281419 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:51.140073  281419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:51.140119  281419 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:51.140144  281419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:51.140549  281419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:51.141273  281419 out.go:368] Setting JSON to false
	I1205 07:34:51.142649  281419 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8238,"bootTime":1764911853,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:51.142766  281419 start.go:143] virtualization:  
	I1205 07:34:51.148202  281419 out.go:179] * [no-preload-241270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:51.152726  281419 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:51.152903  281419 notify.go:221] Checking for updates...
	I1205 07:34:51.159176  281419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:51.162409  281419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:51.165463  281419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:51.168480  281419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:51.171387  281419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:51.174937  281419 config.go:182] Loaded profile config "embed-certs-861489": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:34:51.175074  281419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:51.213268  281419 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:51.213625  281419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:51.340815  281419 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-05 07:34:51.325746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:51.340963  281419 docker.go:319] overlay module found
	I1205 07:34:51.344513  281419 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:51.347411  281419 start.go:309] selected driver: docker
	I1205 07:34:51.347445  281419 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:51.347462  281419 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:51.348356  281419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:51.437556  281419 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-05 07:34:51.424478119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:51.437706  281419 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:34:51.437931  281419 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:34:51.441004  281419 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:51.443747  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:34:51.443810  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:51.443819  281419 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:51.443904  281419 start.go:353] cluster config:
	{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:51.446991  281419 out.go:179] * Starting "no-preload-241270" primary control-plane node in "no-preload-241270" cluster
	I1205 07:34:51.449614  281419 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:51.452524  281419 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:51.455242  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:51.455379  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:51.455417  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json: {Name:mk8fc06bb3887c20c8fe3b5251e5d5a01b5343f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:51.455591  281419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:51.455834  281419 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.455890  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:51.455897  281419 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.395µs
	I1205 07:34:51.455908  281419 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:51.455918  281419 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.455946  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:51.455951  281419 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 34.634µs
	I1205 07:34:51.455956  281419 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:51.455970  281419 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.455996  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:51.456000  281419 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.18µs
	I1205 07:34:51.456006  281419 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:51.456016  281419 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.456042  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:51.456046  281419 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.795µs
	I1205 07:34:51.456057  281419 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:51.456070  281419 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.456098  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:51.456103  281419 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.236µs
	I1205 07:34:51.456108  281419 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:51.456116  281419 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.456140  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:51.456144  281419 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.391µs
	I1205 07:34:51.456149  281419 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:51.456157  281419 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.456182  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:51.456186  281419 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.9µs
	I1205 07:34:51.456191  281419 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:51.456199  281419 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.456226  281419 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:51.456230  281419 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.09µs
	I1205 07:34:51.456236  281419 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:51.456241  281419 cache.go:87] Successfully saved all images to host disk.
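Because this profile is started with --preload=false, minikube skips the preload tarball and instead verifies the per-image cache shown above, one tar file per image under the arch-specific directory. An illustrative listing (path taken from the log; file names abbreviated):

	ls /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/
	# etcd_3.6.5-0  kube-apiserver_v1.35.0-beta.0  kube-proxy_v1.35.0-beta.0  pause_3.10.1  ...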
	I1205 07:34:51.476465  281419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:51.476489  281419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:34:51.476505  281419 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:51.476536  281419 start.go:360] acquireMachinesLock for no-preload-241270: {Name:mk38da592769bcf9f80cfe38cf457b769a394afe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:51.476637  281419 start.go:364] duration metric: took 87.008µs to acquireMachinesLock for "no-preload-241270"
	I1205 07:34:51.476663  281419 start.go:93] Provisioning new machine with config: &{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:51.476728  281419 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
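An illustrative way to double-check the bridge that was just created (name, subnet, and gateway taken from the lines above; not part of the test run):

	docker network inspect no-preload-241270 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 192.168.76.1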
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
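The san=[...] list above is what ends up in the machine's server certificate. A hedged way to inspect it after provisioning (assumes OpenSSL 1.1.1+ for the -ext flag; the path is the one logged above):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem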
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
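The sed sequence above rewrites /etc/containerd/config.toml in place (cgroupfs instead of systemd cgroups, the pause:3.10.1 sandbox image, the CNI conf dir) before this restart. An illustrative check against the edited file, with the values the commands above set:

	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expect: SystemdCgroup = false
	#         sandbox_image = "registry.k8s.io/pause:3.10.1"
	#         conf_dir = "/etc/cni/net.d"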
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
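Each cache entry is checked on the node with stat -c "%s %y"; exit status 1 ("No such file or directory") is treated as "needs transfer" and triggers the scp seen above. A small sketch of that convention, with a hypothetical needsTransfer helper:

package main

import (
	"fmt"
	"os/exec"
)

// needsTransfer stands in for the existence check in the log: run
// `stat -c "%s %y"` on the target path and treat a non-zero exit
// (file missing) as "copy the cached image over".
func needsTransfer(path string) bool {
	err := exec.Command("stat", "-c", "%s %y", path).Run()
	return err != nil // status 1 => "No such file or directory"
}

func main() {
	fmt.Println(needsTransfer("/var/lib/minikube/images/pause_3.10.1"))
}
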
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
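Each transferred tarball is then streamed into containerd's k8s.io namespace with ctr images import, and the overall time is reported as a duration metric. A sketch of one such load, with illustrative paths:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImage sketches the import step from the log: stream a saved image
// tarball into containerd's k8s.io namespace and time it, the way the
// "duration metric" lines do.
func loadImage(tarball string) error {
	start := time.Now()
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import",
		tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("duration metric: took %s to load %s\n", time.Since(start), tarball)
	return nil
}

func main() {
	_ = loadImage("/var/lib/minikube/images/pause_3.10.1")
}
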
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
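The kubelet drop-in above overrides ExecStart to point at the version-pinned binary under /var/lib/minikube/binaries. A rough sketch of rendering it with text/template (the struct fields here are invented for illustration, not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// unit reproduces the drop-in shape from the log; the empty ExecStart=
// clears any inherited value before the real one is set.
const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.35.0-beta.0", "no-preload-241270", "192.168.76.2",
	})
}
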
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
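	Note that podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) in the config above must not overlap; a quick stdlib check of that invariant (ours, not kubeadm's):

package main

import (
	"fmt"
	"net/netip"
)

// Parse the two CIDRs pinned in kubeadm.yaml and confirm they are disjoint.
func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("overlap:", pod.Overlaps(svc)) // false: pod and service ranges stay apart
}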
	
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
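Because this is a no-preload profile, the kubelet/kubeadm/kubectl binaries come from dl.k8s.io with a published .sha256 alongside each file. A sketch of the download-and-verify pattern the checksum=file: URLs above imply (fetchVerified is a hypothetical helper; the dl.k8s.io layout is real):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads a binary and its published .sha256 file, hashing
// the stream as it is written, then compares digests before trusting it.
func fetchVerified(url, dest string) error {
	sum, err := httpGetString(url + ".sha256")
	if err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(sum) {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, sum)
	}
	return nil
}

func httpGetString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	err := fetchVerified(
		"https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl",
		"/tmp/kubectl")
	fmt.Println(err)
}
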
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
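The bash one-liner above atomically rewrites /etc/hosts: it filters out any stale control-plane.minikube.internal line, appends the current mapping, and copies a temp file back into place. The same idea in Go, parameterised so it can be tried on a scratch file rather than the real /etc/hosts (updateHosts is ours):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any line already ending in the control-plane hostname,
// appends the fresh IP mapping, and swaps the file in via a temp rename.
func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(updateHosts("/tmp/hosts", "192.168.76.2", "control-plane.minikube.internal"))
}
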
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
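Profile certs are generated in-process and written under a file lock, as the crypto.go/lock.go lines show. A compact stdlib sketch of issuing a client-auth certificate (self-signed here as a stand-in for signing with the minikube CA; all field values are placeholders):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Create a key, build a client-auth certificate template, sign it, and
// PEM-encode both halves to stdout.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~CertExpiration:26280h0m0s
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
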
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
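The test -L/ln -fs pairs above install each CA under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates trust anchors in /etc/ssl/certs. A sketch of that hash-and-symlink step (installCA is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA asks openssl for the subject hash of a PEM cert, then points
// <hash>.0 in the trust dir at it, emulating the ln -fs in the log.
func installCA(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := trustDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
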
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
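The grep-then-rm loop above deletes any kubeconfig that no longer points at https://control-plane.minikube.internal:8443 so kubeadm can regenerate it. The equivalent logic as a sketch (cleanStale is ours; the file list matches the log):

package main

import (
	"os"
	"strings"
)

// cleanStale removes any kubeconfig that is missing or does not mention the
// expected control-plane endpoint, mirroring the rm -f calls above.
func cleanStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // missing or stale: remove, ignoring errors like rm -f
		}
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
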
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
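The root failure is the [kubelet-check] probe: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s, and every attempt here ended in connection refused because the kubelet never started. A sketch approximating that wait loop (waitKubelet is illustrative, not kubeadm's code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubelet polls the kubelet healthz endpoint until it answers 200 or the
// time budget runs out. A connection-refused error on every attempt means the
// kubelet never came up, which is exactly the failure recorded above.
func waitKubelet(deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", deadline)
}

func main() {
	fmt.Println(waitKubelet(4 * time.Minute))
}
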
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
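The probe kubeadm is waiting on here is an ordinary HTTP health check, so it can be replayed by hand before minikube tears the state down and retries. A minimal sketch, assuming the docker driver, the node container name no-preload-241270 seen later in this report's inspect output, and that curl is available inside the node image:

    # Re-run the kubelet health probe that kubeadm polls for up to 4m0s.
    docker exec no-preload-241270 curl -sSL http://127.0.0.1:10248/healthz
    # Follow the troubleshooting hints printed above.
    docker exec no-preload-241270 systemctl status kubelet
    docker exec no-preload-241270 journalctl -xeu kubelet --no-pager | tail -n 50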
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
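The cleanup sequence above is a grep-then-remove pass: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A condensed sketch of the same logic, using only the paths and endpoint that appear in this log:

    # Drop any kubeconfig that does not point at the expected API endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done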
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
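The describe-nodes failure is a downstream symptom of the same problem: no kube-apiserver container ever started, so kubectl's connection to localhost:8443 is refused. Both halves can be verified with commands already used in this log (the kubectl invocation below is a hypothetical variant of the describe call above):

    # Empty output confirms no kube-apiserver container exists.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Any API call through the minikube kubeconfig then fails the same way.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig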
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079339  281419 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
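For a multi-profile test run like this one, the boxed advice maps to the failing profile as below (a sketch; the profile name comes from this test, and --file plus the global -p flag are standard minikube options):

    # Capture the full log bundle for the failing profile before filing an issue.
    out/minikube-linux-arm64 logs -p no-preload-241270 --file=logs.txt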
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
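The log surfaces two remedies: the cgroup-driver override quoted in the suggestion, and, for the cgroups v1 deprecation warning, the kubelet configuration option the warning names. A sketch of both, assuming the KubeletConfiguration field is the lowerCamelCase form failCgroupV1 (the warning gives only the name 'FailCgroupV1'):

    # Retry per the suggestion above: force the systemd cgroup driver.
    out/minikube-linux-arm64 start -p no-preload-241270 \
      --extra-config=kubelet.cgroup-driver=systemd

    # Hypothetical KubeletConfiguration fragment for the cgroups v1 warning.
    cat <<'EOF' > kubelet-cgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF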
	I1205 07:43:20.088336  281419 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
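Exit status 109 matches the "Exiting due to K8S_KUBELET_NOT_RUNNING" line in the output above. Reproducing the failure outside the test harness means replaying the recorded arguments verbatim:

    # Re-run the exact first-start invocation the test recorded.
    out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr \
      --wait=true --preload=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0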
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
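The post-mortem snapshots proxy variables first because a stray proxy can masquerade as the connection-refused errors seen above; all three are empty here. The equivalent manual check:

    # Confirm no proxy variables are influencing the run.
    env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"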
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:52.549450094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eef7bdd89ca732078c94f4927e3c7a21319eafbef30f0346d5566202053e4aac",
	            "SandboxKey": "/var/run/docker/netns/eef7bdd89ca7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:39:6f:c0:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "3e81b46f5657325d06de99919670a1c40d711f2851cee0f84aa291f2a1c6cc3d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
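Note that HostConfig.PortBindings above leaves every HostPort empty (Docker assigns free host ports at start), so the effective mappings live under NetworkSettings.Ports. A small sketch of reading the assigned SSH port back out, using the same Go template minikube itself applies later in this log (container name taken from this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-241270
    # expected output for this run: 33088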
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 6 (339.627653ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:43:20.520044  293955 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
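The status check fails only on the kubeconfig side: the container is Running, but no "no-preload-241270" entry was ever written to the kubeconfig because the start aborted before bootstrap finished. A sketch of the repair the warning itself proposes (it is not verified here whether this succeeds before a completed start):

    out/minikube-linux-arm64 -p no-preload-241270 update-context
    kubectl config get-contexts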
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-943366                                                                                                                                                                                                                                  │ old-k8s-version-943366       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p cert-expiration-379442                                                                                                                                                                                                                                  │ cert-expiration-379442       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
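	(configureAuth signs a server certificate against the minikube CA with the SANs listed above. minikube does this natively in Go via crypto/x509, so the openssl commands below are only an equivalent sketch; file names are taken from the paths in the log:)

	    # Hypothetical openssl equivalent of provision.go's server-cert generation.
	    openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.no-preload-241270" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-241270')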
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
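	(The two df probes above are how minikube samples disk pressure on /var before continuing; the awk column picks are the only non-obvious part. Example outputs are illustrative:)

	    df -h  /var | awk 'NR==2{print $5}'   # row 2, column 5 = Use%  (e.g. "23%")
	    df -BG /var | awk 'NR==2{print $4}'   # row 2, column 4 = Avail in GiB blocks (e.g. "187G")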
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
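	(The logger strips shell quoting from the find invocation above; re-quoted so it can be run as shown. The quoting is an assumption, but it is consistent with the disabled-file names reported on the next line:)

	    # Rename every bridge/podman CNI config that isn't already disabled,
	    # printing each path; GNU find substitutes {} inside the sh -c argument.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;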
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
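	(The config rewrite above reduces to: point crictl at the containerd socket, pin the sandbox image, force cgroupfs to match the detected host driver, and enable IPv4 forwarding before the restart. Condensed, with only the load-bearing edits kept:)

	    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart containerd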
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
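	(The two 60s waits gate on the socket appearing and crictl answering; checked by hand they look like:)

	    stat /run/containerd/containerd.sock   # socket must exist before the crictl wait starts
	    sudo /usr/local/bin/crictl version     # expects RuntimeName: containerd, RuntimeApiVersion: v1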
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
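	(Subnet selection above walks candidate private /24s, skipping 192.168.49/58/67/76.0 because inspect shows them taken, and lands on 192.168.85.0/24. The create call, verbatim from the log but reformatted for readability; note the -o keys are literal bridge-driver option names:)

	    docker network create --driver=bridge \
	      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=newest-cni-622440 \
	      newest-cni-622440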
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
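	(The connection-reset here is expected on first contact: sshd inside the fresh container is not up yet, and provisioning retries until it is, succeeding at 07:35:00 below. A hypothetical wait loop doing the same from a shell, with PORT and KEY resolved as earlier; this is not minikube code:)

	    # Retry until the container's sshd accepts a session.
	    until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	          -p "$PORT" -i "$KEY" docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1
	    done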
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
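	(Each "daemon lookup ... No such image" line is a cache miss against the local Docker daemon, which is what forces the file-cache path that follows. The probe is equivalent to:)

	    docker image inspect registry.k8s.io/kube-apiserver:v1.35.0-beta.0 >/dev/null 2>&1 \
	      || echo "not in daemon; falling back to ~/.minikube/cache/images/arm64"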
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
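	(Each image follows the same three-step path visible above: stat for an existing copy on the node, stream the cached tarball over SSH when the stat fails, then import it into containerd's k8s.io namespace. Sketched with plain scp standing in for minikube's internal ssh_runner; PORT/KEY as before, and the real transfer also runs with root privileges:)

	    IMG=/var/lib/minikube/images/pause_3.10.1
	    CACHE=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	    stat -c "%s %y" "$IMG" 2>/dev/null \
	      || scp -P "$PORT" -i "$KEY" "$CACHE" docker@127.0.0.1:"$IMG"
	    sudo ctr -n=k8s.io images import "$IMG"   # run on the node after the copy lands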
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
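	(The arch-mismatch warning means the storage-provisioner manifest resolved to amd64 even though the node is arm64, so minikube removes the local copy and re-fetches the right platform. One way to confirm the manifest does carry both platforms, assuming registry access from the host:)

	    # List the architectures published in the multi-arch manifest.
	    docker manifest inspect gcr.io/k8s-minikube/storage-provisioner:v5 \
	      | grep '"architecture"'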
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
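The sed series above rewrites a handful of keys in /etc/containerd/config.toml: the pause/sandbox image, the cgroup driver, the CNI conf_dir, and unprivileged ports. A minimal way to eyeball the result (values taken from the sed commands in the log; line numbers will vary by config):

    # Spot-check the keys the sed edits touched:
    sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # Expected (illustrative) hits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true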
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
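The "No such image" responses above are expected on a fresh host: before falling back to its on-disk cache, minikube first asks the local Docker daemon for each image. The equivalent manual probe looks like this (sketch; any of the image names above works):

    # Non-zero exit means the daemon has no local copy, which is what
    # triggers the cache/transfer path seen in the lines that follow:
    docker image inspect registry.k8s.io/pause:3.10.1 >/dev/null 2>&1 \
      || echo "not in local daemon; loading from minikube cache instead"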
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
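Each missing image is copied to /var/lib/minikube/images on the node and then imported into containerd's k8s.io namespace. Done by hand, the two steps would look roughly like this (sketch; key path, port, and user taken from the log above, and the second command runs on the node):

    # 1) Copy the cached tarball to the node:
    scp -i /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa \
        -P 33093 \
        /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 \
        docker@127.0.0.1:/var/lib/minikube/images/pause_3.10.1
    # 2) Import it where the kubelet's runtime can see it:
    sudo ctr -n k8s.io images import /var/lib/minikube/images/pause_3.10.1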
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
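The unit fragment above is written out as a systemd drop-in (the 10-kubeadm.conf scp appears a few lines below). Once it is in place, the effective kubelet command line can be checked with systemd itself (sketch, run on the node):

    # Show the kubelet unit plus the minikube drop-in that supplies ExecStart:
    systemctl cat kubelet
    # Or read the drop-in directly:
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf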
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
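With no preload tarball available, kubeadm, kubectl, and kubelet come straight from dl.k8s.io with a published sha256, as the "Not caching binary" URLs above show. The equivalent manual fetch-and-verify for one binary (sketch; version and arch taken from the log):

    V=v1.35.0-beta.0; B=kubelet
    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/arm64/${B}"
    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/arm64/${B}.sha256"
    # Fails loudly if the download does not match the published checksum:
    echo "$(cat ${B}.sha256)  ${B}" | sha256sum --check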
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
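The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before the cluster is initialized. Recent kubeadm releases can sanity-check such a file; a sketch, assuming the config validate subcommand is available in this kubeadm build:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new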
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
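The apiserver cert just generated carries SANs for the service VIP, loopback, and the node IP (the IP list on the "Generating cert" line above). To confirm what actually landed in the cert (sketch; path from the log, DNS-name SANs omitted from the expectation):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # Expect the IPs listed above: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2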
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
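The test -L / ln -fs dance above reproduces OpenSSL's hashed-directory layout: each trusted CA is exposed as /etc/ssl/certs/<subject-hash>.0, where the hash comes from openssl x509 -hash. That is why minikubeCA.pem pairs with b5213941.0 a few lines earlier. Sketch:

    # Prints the 8-hex-digit subject hash openssl uses to locate the CA:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above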
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
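	[editor note] The grep-then-rm pattern above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; anything else (including a missing file, as in this run) is removed before kubeadm init. A roughly equivalent shell loop, endpoint as logged:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # drop configs that do not point at the expected endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done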
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
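	[editor note] Each "Loading image" step above is a plain `ctr` import into containerd's k8s.io namespace, the namespace the CRI plugin serves images from; that is why the kubelet can see images loaded this way. A hand-run equivalent, tarball path taken from this log:

	    # import a cached image tarball into the CRI-visible namespace
	    sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	    # confirm it landed where the kubelet will look
	    sudo ctr -n=k8s.io images ls | grep kube-proxy
	    sudo crictl images | grep kube-proxy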
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
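	[editor note] The unit fragment above is written as a systemd drop-in (scp'd later in this log to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). The doubled ExecStart= is deliberate: an empty ExecStart= first clears the base unit's command so the drop-in's kubelet invocation replaces it rather than appending. To inspect and apply the merged unit on the node:

	    # show the base unit plus all drop-ins, in merge order
	    systemctl cat kubelet
	    # after editing drop-ins, reload and restart
	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet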
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
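	[editor note] The three documents above are what gets written to /var/tmp/minikube/kubeadm.yaml below and handed to kubeadm init. On recent kubeadm releases a config like this can be sanity-checked offline first; `kubeadm config validate` exists on current versions, though flag support varies by release:

	    # check the generated config without touching the node
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # print stock defaults for the same API versions, for comparison
	    kubeadm config print init-defaults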
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
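	[editor note] The binaries are fetched from dl.k8s.io with a published SHA256 alongside (the "checksum=file:...sha256" URLs above). The same verification by hand, version and arch from this run; the .sha256 file contains only the bare digest:

	    V=v1.35.0-beta.0; A=arm64
	    curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/${A}/kubelet"
	    echo "$(curl -fsSL https://dl.k8s.io/release/${V}/bin/linux/${A}/kubelet.sha256)  kubelet" \
	      | sha256sum --check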
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
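	[editor note] The cgroups v1 warning above is advisory on this 5.15 host, but the message states that kubelet v1.35+ on a cgroup v1 node needs both an explicit kubelet opt-in and a skipped preflight check. A hedged sketch of that combination; the field name is taken from the warning text itself (KubeletConfiguration uses lowerCamelCase):

	    # KubeletConfiguration fragment, merged into the kubeadm --config file:
	    #   failCgroupV1: false
	    # then skip the corresponding preflight validation:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification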
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
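	[editor note] The failure mode here is the kubelet never answering its local healthz, so all useful signal lives on the node rather than in kubeadm's output. First steps, following the hint in the error text above:

	    # is the unit running or crash-looping?
	    systemctl status kubelet
	    # last start attempt with full context
	    journalctl -xeu kubelet --no-pager | tail -n 100
	    # the exact probe kubeadm was polling
	    curl -sS http://127.0.0.1:10248/healthz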
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
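
Both SystemVerification warnings in the stderr above point at cgroup v1 on the host. A generic check for which cgroup version a node is running (not part of this test run) is to inspect the filesystem type mounted at /sys/fs/cgroup:

    stat -fc %T /sys/fs/cgroup
    # prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host
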
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
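
The "Gathering logs" sequence above is minikube's standard evidence pass after a failed start: the kubelet and containerd journals, dmesg, kubectl describe nodes, and a container listing. The same bundle can be pulled by hand over minikube ssh; a sketch, assuming the profile name shown in the containerd and kubelet sections further below:

    PROFILE=no-preload-241270   # taken from this log; substitute your own profile
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
    minikube -p "$PROFILE" ssh -- sudo journalctl -u containerd -n 400
    minikube -p "$PROFILE" ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
    minikube -p "$PROFILE" ssh -- sudo crictl ps -a
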
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079339  281419 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
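
For context, the suggested workaround applied to a fresh start of this profile would look roughly like the following (the profile name, driver, and runtime are taken from this log; the remaining flags of the test's real invocation are omitted, and the kubelet section below suggests the failure is cgroup v1 validation rather than a cgroup-driver mismatch, so this may not help here):

    out/minikube-linux-arm64 start -p no-preload-241270 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd
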
	I1205 07:43:20.088336  281419 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:00 no-preload-241270 containerd[758]: time="2025-12-05T07:35:00.872186619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.701941885Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.704289218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722125402Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722911774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.075081950Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.078766218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.099917836Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.100531825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.790505473Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.792674113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.806940960Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.807327368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.207463637Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.209905191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.218221241Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.219001377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.595991834Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.598386708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.607030393Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.608072538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.091545558Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.093932416Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108389516Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108843487Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:21.239687    5640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:21.240516    5640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:21.242074    5640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:21.242394    5640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:21.243830    5640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:43:21 up  2:25,  0 user,  load average: 0.69, 1.05, 1.70
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:43:17 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:18 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 05 07:43:18 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:18 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:18 no-preload-241270 kubelet[5447]: E1205 07:43:18.404454    5447 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:18 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:18 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:19 no-preload-241270 kubelet[5453]: E1205 07:43:19.159332    5453 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:19 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:19 no-preload-241270 kubelet[5516]: E1205 07:43:19.937628    5516 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:20 no-preload-241270 kubelet[5558]: E1205 07:43:20.696996    5558 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
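The crash loop in the kubelet section above is the proximate cause of this failure: every systemd restart (counters 318 through 321) exits with "kubelet is configured to not run on a host using cgroup v1", which appears to be kubelet's cgroup v1 validation (presumably the failCgroupV1 KubeletConfiguration setting), so the API server never comes up and every kubectl call above is refused on localhost:8443. A quick way to confirm the node's cgroup mode, assuming shell access to it (for example via `minikube ssh -p no-preload-241270`):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/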
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 6 (318.303373ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:43:21.798832  294182 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (510.77s)
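The status probe above also reports the profile's endpoint missing from the shared kubeconfig ("no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig), so the kubectl context is stale in addition to the apiserver being down. A minimal repair once the cluster is reachable again, assuming the profile still exists:

	# regenerate the kubeconfig entry for this profile, then select it
	minikube update-context -p no-preload-241270
	kubectl config use-context no-preload-241270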

x
+
TestStartStop/group/newest-cni/serial/FirstStart (512.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1205 07:35:06.310693    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.317007    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.328318    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.349623    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.390973    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.472344    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.633734    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:06.955071    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:07.597018    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:08.879261    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:11.441499    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:16.562817    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:26.804892    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:35:47.286376    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:36:04.877348    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:36:28.248456    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:37:00.038299    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:37:16.968265    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:37:50.172368    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:01.797519    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.309099    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.315456    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.326911    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.348359    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.389923    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.471521    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.632997    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:11.954822    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:12.596910    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:13.878246    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:16.440438    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:21.562397    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:31.803846    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:52.285220    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:39:14.019840    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:39:33.246729    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:40:06.310455    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:40:34.015865    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:40:55.168117    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:42:16.968522    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:42:17.095393    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:43:01.797852    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:43:11.309391    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m30.998036771s)

-- stdout --
	* [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
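	The per-image caching above is the expected fallback path: both preload URLs returned 404 because no preload tarball is published for v1.35.0-beta.0, so minikube saves each image to the local cache instead. The 404 can be reproduced directly, assuming curl is available:
	
		# prints the HTTP status line for the preload tarball (404 here)
		curl -sIL https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n 1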
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
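
configureAuth, which completes here after 366ms, regenerates the host certs and then pushes three files into /etc/docker on the machine (copyRemoteCerts). A compact sketch of that push step, with scpToMachine as an assumed stand-in for the ssh_runner.go:362 scp calls in the log:

    package main

    import "fmt"

    // scpToMachine stands in for minikube's SSH-based scp; a real
    // implementation streams the file over the SSH session.
    func scpToMachine(src, dst string) error {
    	fmt.Printf("scp %s --> %s\n", src, dst)
    	return nil
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/21997-2385/.minikube"
    	// the three transfers logged above, in the same order
    	pairs := [][2]string{
    		{base + "/certs/ca.pem", "/etc/docker/ca.pem"},
    		{base + "/machines/server.pem", "/etc/docker/server.pem"},
    		{base + "/machines/server-key.pem", "/etc/docker/server-key.pem"},
    	}
    	for _, p := range pairs {
    		if err := scpToMachine(p[0], p[1]); err != nil {
    			panic(err)
    		}
    	}
    }
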
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
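
The run of sed edits above rewrites /etc/containerd/config.toml in place (pause image, cgroupfs instead of SystemdCgroup, runc v2, CNI conf dir) and then reloads systemd and restarts containerd. A hedged sketch of driving the same sequence from Go; it execs a local shell here, whereas minikube issues each command through its SSH runner:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	cmds := []string{
    		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart containerd",
    	}
    	for _, c := range cmds {
    		// stop at the first failing edit, echoing its combined output
    		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
    			log.Fatalf("%s failed: %v\n%s", c, err, out)
    		}
    	}
    }
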
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
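
Every image tarball above follows the same pattern: stat the remote path, and only scp the cached file over when the stat exits non-zero. A generic sketch of that existence check, with sshRun and scp as assumed stand-ins for the ssh_runner calls in the log:

    package main

    import "fmt"

    // ensureRemote copies local to remote only if remote is missing,
    // mirroring the existence checks at ssh_runner.go:352 above.
    func ensureRemote(local, remote string,
    	sshRun func(cmd string) error, scp func(local, remote string) error) error {
    	if err := sshRun(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
    		return nil // already present; skip the transfer
    	}
    	return scp(local, remote)
    }

    func main() {
    	_ = ensureRemote(
    		".minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
    		"/var/lib/minikube/images/pause_3.10.1",
    		func(string) error { return fmt.Errorf("No such file or directory") },
    		func(l, r string) error { fmt.Println("scp", l, "-->", r); return nil },
    	)
    }
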
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
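
The retry.go lines above show transient SSH session failures being retried after randomized delays (~209ms, ~334ms) while the client is reset. A generic sketch of that retry-with-backoff shape, not minikube's actual retry package; the base interval and attempt count are illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a randomized, growing
    // interval between failures -- the same shape as the "will retry
    // after ..." lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	_ = retry(3, 200*time.Millisecond, func() error {
    		return errors.New("ssh: rejected: connect failed (open failed)")
    	})
    }
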
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
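
LoadCachedImages ends here: each tarball staged under /var/lib/minikube/images was imported into containerd's k8s.io namespace with "ctr -n=k8s.io images import". A minimal sketch of that import loop (local exec for brevity; minikube runs it over SSH):

    package main

    import (
    	"log"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	tars, err := filepath.Glob("/var/lib/minikube/images/*")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, t := range tars {
    		// same command shape as the "ctr -n=k8s.io images import" runs above
    		cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", t)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("import %s: %v\n%s", t, err, out)
    		}
    	}
    }
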
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
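
The generated kubeadm config above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), all derived from the KubernetesConfig fields logged earlier. A toy sketch of rendering such a document from Go with text/template; the field set here is deliberately trimmed and is not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // a cut-down ClusterConfiguration template in the style of the log
    var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: {{.Endpoint}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `))

    func main() {
    	_ = tmpl.Execute(os.Stdout, struct {
    		KubernetesVersion, Endpoint, PodSubnet, ServiceSubnet string
    	}{"v1.35.0-beta.0", "control-plane.minikube.internal:8443",
    		"10.42.0.0/16", "10.96.0.0/12"})
    }
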
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
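
The kubeadm/kubectl/kubelet transfers above were served from the local cache; when a binary is not cached, the log shows it being fetched from dl.k8s.io with a .sha256 checksum file alongside. A small sketch of how those URLs are composed:

    package main

    import "fmt"

    // binaryURL mirrors the download URLs in the log:
    // https://dl.k8s.io/release/<version>/bin/<os>/<arch>/<name>
    func binaryURL(version, goos, goarch, name string) string {
    	return fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s",
    		version, goos, goarch, name)
    }

    func main() {
    	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
    		fmt.Println(binaryURL("v1.35.0-beta.0", "linux", "arm64", b))
    		fmt.Println(binaryURL("v1.35.0-beta.0", "linux", "arm64", b) + ".sha256")
    	}
    }
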
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
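
crypto.go above generates the apiserver profile cert with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A compact crypto/x509 sketch of issuing a cert carrying those IPs; unlike minikube, it self-signs rather than signing with the shared minikubeCA key:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// the IP SANs from the log line above
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// self-signed: template doubles as parent
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
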
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
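The three test/ln/hash sequences above install each CA into the node's trust store: the PEM is symlinked into /etc/ssl/certs, then a second symlink named after its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above) is added, which is the lookup scheme OpenSSL uses for CA directories. A minimal by-hand equivalent, using only commands that appear in the log:

    # Compute the subject hash, then create the two symlinks minikube makes.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"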
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
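The eight Run lines above are one check per kubeconfig: grep for the expected control-plane endpoint and, when the file is missing or points elsewhere, delete it so kubeadm regenerates it. Condensed into a sketch built from the same commands:

    # One pass of minikube's stale-kubeconfig cleanup, per the log above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done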
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
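The failure above is kubeadm's wait-control-plane phase timing out on the kubelet health endpoint. The probe it performs, plus the two diagnostics the output recommends, can be run by hand on the node (all three commands are quoted verbatim in the output above):

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet
    journalctl -xeu kubelet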
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
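Between attempts minikube wipes the partial control-plane state. The reset it issues (taken from the Run line above; --force skips the confirmation prompt, and the CRI socket targets containerd):

    sudo /bin/bash -c "env PATH=\"/var/lib/minikube/binaries/v1.35.0-beta.0:\$PATH\" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force"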
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:24.912154  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000692536s
	I1205 07:43:24.912179  282781 kubeadm.go:319] 
	I1205 07:43:24.912237  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:24.912269  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:24.912374  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:24.912378  282781 kubeadm.go:319] 
	I1205 07:43:24.912483  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:24.912515  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:24.912545  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:24.912549  282781 kubeadm.go:319] 
	I1205 07:43:24.918373  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:24.918871  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:24.919001  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:24.919288  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:24.919298  282781 kubeadm.go:319] 
	I1205 07:43:24.919374  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:24.919431  282781 kubeadm.go:403] duration metric: took 8m6.798617744s to StartCluster
	I1205 07:43:24.919465  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:24.919523  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:24.960538  282781 cri.go:89] found id: ""
	I1205 07:43:24.960597  282781 logs.go:282] 0 containers: []
	W1205 07:43:24.960612  282781 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:24.960628  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:24.960720  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:25.008615  282781 cri.go:89] found id: ""
	I1205 07:43:25.008645  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.008654  282781 logs.go:284] No container was found matching "etcd"
	I1205 07:43:25.008660  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:25.008731  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:25.051444  282781 cri.go:89] found id: ""
	I1205 07:43:25.051465  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.051473  282781 logs.go:284] No container was found matching "coredns"
	I1205 07:43:25.051479  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:25.051537  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:25.082467  282781 cri.go:89] found id: ""
	I1205 07:43:25.082489  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.082555  282781 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:25.082563  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:25.082640  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:25.147881  282781 cri.go:89] found id: ""
	I1205 07:43:25.147902  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.147911  282781 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:25.147917  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:25.147976  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:25.224329  282781 cri.go:89] found id: ""
	I1205 07:43:25.224361  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.224370  282781 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:25.224378  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:25.224434  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:25.250842  282781 cri.go:89] found id: ""
	I1205 07:43:25.250870  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.250879  282781 logs.go:284] No container was found matching "kindnet"
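After the second failed init, minikube scans the CRI for any control-plane container; every query above returns an empty ID list, confirming nothing ever came up. The scan, condensed into the loop it effectively performs:

    # Same crictl query the log runs once per component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done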
	I1205 07:43:25.250889  282781 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:25.250901  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:25.319837  282781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:25.312291    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.313007    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314611    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314898    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.316383    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:25.312291    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.313007    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314611    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314898    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.316383    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:43:25.319857  282781 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:25.319870  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:25.371742  282781 logs.go:123] Gathering logs for container status ...
	I1205 07:43:25.371978  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:43:25.409796  282781 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:25.409818  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:25.474308  282781 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:25.474345  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
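The "Gathering logs for ..." steps above assemble the evidence bundle for the failure report. Run by hand, they are:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400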
	W1205 07:43:25.487408  282781 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:25.487510  282781 out.go:285] * 
	W1205 07:43:25.487601  282781 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:25.487658  282781 out.go:285] * 
	W1205 07:43:25.490185  282781 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:25.493272  282781 out.go:203] 
	W1205 07:43:25.494648  282781 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:25.494700  282781 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:25.494737  282781 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:25.496566  282781 out.go:203] 

                                                
                                                
** /stderr **
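The failure above reduces to one signal: kubeadm's kubelet health probe on 127.0.0.1:10248 never answered within 4m0s, and the cgroups v1 deprecation warning points at the most likely misconfiguration. A minimal triage sketch in shell — the healthz URL, the systemctl/journalctl commands, and the --extra-config suggestion are all taken verbatim from the log above; treating the cgroup-driver override as the actual fix on this kernel is an assumption, not something the log confirms:

	# Probe the endpoint kubeadm polled for 4m0s (URL from the log):
	curl -sSL http://127.0.0.1:10248/healthz

	# Ask the node why the kubelet never became healthy (commands quoted in the log):
	systemctl status kubelet
	journalctl -xeu kubelet

	# Retry the start with the cgroup-driver override the log itself suggests
	# (profile name reused from this run; effectiveness on a cgroups v1 host
	# is assumed, per the SystemVerification warning):
	minikube start -p newest-cni-622440 --extra-config=kubelet.cgroup-driver=systemd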
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
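For reproduction outside CI, the single failing case can be rerun against a locally built binary. A hedged sketch, assuming a minikube source checkout with the integration suite in its usual location; the -run pattern is the test name from the assertion above, while the exact harness flags the Jenkins job passes are not shown in this report:

	go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/FirstStart' -timeout 90m -v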
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-622440
helpers_test.go:243: (dbg) docker inspect newest-cni-622440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	        "Created": "2025-12-05T07:34:55.965403434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:56.049476512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hosts",
	        "LogPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4-json.log",
	        "Name": "/newest-cni-622440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-622440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-622440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	                "LowerDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-622440",
	                "Source": "/var/lib/docker/volumes/newest-cni-622440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-622440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-622440",
	                "name.minikube.sigs.k8s.io": "newest-cni-622440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3f2de86e0a6b922a19395ef639278ce284c7b00e34a68ffb9832a027d78cfb2",
	            "SandboxKey": "/var/run/docker/netns/c3f2de86e0a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-622440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:ac:81:a7:80:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96c6294e00fc4b96dda84202da479b822dd69419748060a344f1800d21559cfe",
	                    "EndpointID": "a946bb977c5c9cfd0a36319812e5cea73d907a080ae56fc86cef3fb8982f4b72",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-622440",
	                        "9420074472d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
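The inspect dump at least rules out a dead container: State.Status is "running" and 8443/tcp is published on loopback. For quick spot checks the same fact can be pulled out of the JSON with a Go template; a small sketch, with the container name and port taken from the output above:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-622440
	# prints 33096 for the state captured above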
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 6 (364.28462ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:43:26.025122  295044 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
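Exit status 6 here is the kubeconfig mismatch spelled out in stderr: the newest-cni-622440 endpoint was never written to the Jenkins kubeconfig because kubeadm init failed. Should a later start succeed, the status output's own advice applies; a sketch, with the profile flag assumed to match this run:

	minikube update-context -p newest-cni-622440
	kubectl config current-context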
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p cert-expiration-379442                                                                                                                                                                                                                                  │ cert-expiration-379442       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
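Every required image already exists as a tarball in the per-architecture cache, which is why each "save to tar file ... succeeded" above completes in microseconds. A sketch of how to inspect that cache directly, using the paths from the log:

	ls /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/
	# expected entries include etcd_3.6.5-0, pause_3.10.1, coredns/, and the kube-* v1.35.0-beta.0 tarballs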
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
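The subnet probe above walks candidate private /24 ranges (192.168.49.0, .58.0, .67.0, ...) until one is not already claimed by an existing bridge. To reproduce the "taken" list it compares against, one could run (illustrative):

	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'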
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
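The docker run publishes the node's SSH and API server ports to ephemeral ports on 127.0.0.1; the SSH mapping that libmachine dials below (33088) can be read back with:

	docker port no-preload-241270 22/tcp
	# 127.0.0.1:33088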
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
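configureAuth mints a server certificate whose SANs cover every address the machine is reachable by, per the san=[...] list above. A hedged way to double-check the SANs on the generated certificate:

	openssl x509 -in /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# should list DNS:localhost, DNS:minikube, DNS:no-preload-241270 and IPs 127.0.0.1, 192.168.76.2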
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
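copyRemoteCerts pushes the CA plus the freshly generated server pair into /etc/docker inside the node over the ephemeral SSH port. Assuming the key path and port from the log, the placement can be verified with something like:

	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa -p 33088 docker@127.0.0.1 'ls /etc/docker/*.pem'
	# /etc/docker/ca.pem  /etc/docker/server-key.pem  /etc/docker/server.pem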
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
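The sed run above edits /etc/containerd/config.toml in place before this restart: SystemdCgroup is forced to false to match the detected cgroupfs driver, the sandbox image is pinned to registry.k8s.io/pause:3.10.1, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is turned on. A spot-check of the resulting file, run inside the node (illustrative):

	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml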
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
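This rewrites /etc/hosts inside the node so host.minikube.internal resolves to the docker network gateway (192.168.76.1), giving the node and its pods a stable name for the host. Verifiable from outside the node with:

	docker exec no-preload-241270 grep host.minikube.internal /etc/hosts
	# 192.168.76.1	host.minikube.internal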
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
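Note: each cached image goes through the same three-step pipeline seen above: stat the target under /var/lib/minikube/images (the "Process exited with status 1" blocks are just failed existence checks), scp the tarball over from the host-side cache, then import it into containerd. A sketch of that sequence, with assumed run/scp callbacks standing in for minikube's ssh_runner:

    package main

    import "fmt"

    // ensureAndLoad mirrors the stat -> scp -> ctr import sequence from the
    // log. run executes a command on the node and scp copies a file to it;
    // both are assumed callbacks, not minikube's real API.
    func ensureAndLoad(run func(cmd string) error, scp func(src, dst string) error, src, dst string) error {
        if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)); err != nil {
            // stat failed, so the tarball is not on the node yet.
            if err := scp(src, dst); err != nil {
                return fmt.Errorf("scp %s: %v", src, err)
            }
        }
        // Load the tarball into containerd's k8s.io namespace.
        return run("sudo ctr -n=k8s.io images import " + dst)
    }

    func main() {
        run := func(cmd string) error { fmt.Println("RUN:", cmd); return nil }
        scp := func(src, dst string) error { fmt.Println("SCP:", src, "->", dst); return nil }
        _ = ensureAndLoad(run, scp, "cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1")
    }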
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
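Note: the shell script above is the idempotent half of hostname provisioning. If no line of /etc/hosts already ends with the new hostname, it either rewrites an existing 127.0.1.1 entry in place with sed or appends a fresh one with tee -a, so repeated provisioning runs never duplicate the entry.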
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
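Note: the sed rewrites above bring /etc/containerd/config.toml in line with what the cluster expects before the restart: pin sandbox_image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host, normalize legacy runtime names to io.containerd.runc.v2, reset conf_dir to /etc/cni/net.d, and re-enable unprivileged ports. A sketch that applies a few of the same in-place edits from Go (the sed expressions are copied from the log; running this only makes sense on a node that has that config file):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // In-place config.toml edits, taken verbatim from the log's sed commands.
    var edits = []string{
        `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml`,
        `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
        `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    }

    func main() {
        for _, e := range edits {
            // sh -c preserves the sed quoting, much as running it over SSH does.
            if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
                fmt.Printf("edit failed: %v\n%s", err, out)
                return
            }
        }
        // Pick up the new configuration.
        _ = exec.Command("sh", "-c", "sudo systemctl daemon-reload && sudo systemctl restart containerd").Run()
    }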
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
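Note: before these version probes can run, start.go polls for the containerd socket with a 60-second budget. A minimal sketch of that wait loop (assumed shape; minikube's actual implementation may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the socket path exists or the deadline
    // passes, mirroring "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }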
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
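Note: the eight "daemon lookup ... No such image" errors above are expected on a bare CI host: the image loader first asks the local Docker daemon for each image, and a miss there is harmless because the arm64 tarballs already sit under .minikube/cache/images and are checked against the node's containerd, then transferred, in the lines that follow.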
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
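Note: copying several image tarballs concurrently multiplexes sessions over one SSH connection, and the three "rejected: connect failed (open failed)" errors show the server refusing new channels under that load; the runner resets the client and retries each transfer after a short randomized delay. A sketch of that retry shape, assuming a simple two-attempt helper rather than minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryAfter runs f once and, on failure, waits a jittered delay before
    // a second attempt, like the "will retry after 208.795928ms" lines.
    func retryAfter(f func() error) error {
        if err := f(); err != nil {
            d := time.Duration(200+rand.Intn(200)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            return f()
        }
        return nil
    }

    func main() {
        attempt := 0
        _ = retryAfter(func() error {
            attempt++
            if attempt == 1 {
                return errors.New("ssh: rejected: connect failed (open failed)")
            }
            return nil
        })
    }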
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
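
The block above side-loads each cached OCI tarball into containerd's k8s.io namespace via ctr, which is why the kubelet can later find the images without pulling. A quick spot-check of the result (a sketch, assuming shell access to the node, e.g. via minikube ssh -p no-preload-241270):

    # list imported images in the namespace the CRI actually uses
    sudo ctr -n k8s.io images ls -q | grep -E 'kube-scheduler|etcd|coredns'
    # same view through the CRI layer
    sudo crictl images
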
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
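
The doubled ExecStart= above is the usual systemd override idiom: an empty ExecStart= in a drop-in clears the ExecStart inherited from the base kubelet.service before the real command line is set. Judging by the scp step further down, this text lands on the node roughly as (a sketch; the flag values are exactly the ones logged above):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
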
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
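The three YAML documents above are written to /var/tmp/minikube/kubeadm.yaml.new and copied into place before init (see the cp step further down). They could in principle be sanity-checked on the node first; a sketch, assuming kubeadm's validate subcommand behaves as expected on this multi-document file in this build:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
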
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
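
The checksum=file:... suffixes above mean each binary is verified against its published .sha256 file rather than cached locally. The equivalent manual fetch follows the standard dl.k8s.io pattern (a sketch; the URLs are exactly those logged):

    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
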
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
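
The bash one-liner above rewrites /etc/hosts in place: grep -v drops any stale control-plane.minikube.internal entry, the echo appends the current mapping, and the temp file is copied back with sudo. Verifying the entry afterwards (a sketch):

    getent hosts control-plane.minikube.internal
    # expected: 192.168.76.2  control-plane.minikube.internal
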
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
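
The apiserver profile cert generated above is signed for the cluster service IP, localhost, and the node IP (the IP list appears on the crypto.go:68 line). One way to confirm the SANs after the fact, run on the host (a sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
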
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
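
The test -L / ln -fs pairs above build OpenSSL's hashed certificate directory: each CA in /etc/ssl/certs gets a symlink named <subject-hash>.0, where the hash is what the openssl x509 -hash calls compute (b5213941 for minikubeCA, hence b5213941.0). Deriving one link by hand (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
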
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
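
As the preflight message notes, the control-plane images can be pulled ahead of init with the same config file (a sketch; in this no-preload run the images were already side-loaded from the cache, so this would mostly be a no-op):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
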
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
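
Each of the five kubeconfig files written above (admin, super-admin, kubelet, controller-manager, scheduler) maps to a kubeadm phase and can be regenerated individually; a sketch for one of them:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase kubeconfig admin --config /var/tmp/minikube/kubeadm.yaml
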
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
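
Unlike the no-preload profile, this one carries ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}], which is why the generated config below uses podSubnet 10.42.0.0/16 rather than the 10.244.0.0/16 default seen earlier. The flag that produces this (a sketch of the likely minikube invocation; profile name taken from the log):

    minikube start -p newest-cni-622440 --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16
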
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
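The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that lacks it (here, files that do not exist at all, hence exit status 2) is removed so kubeadm can regenerate it. The loop, condensed into a sketch:

	# drop any kubeconfig that does not reference the expected endpoint (endpoint value from this run)
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done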
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
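The three [WARNING ...] lines are non-fatal only because the init command passes --ignore-preflight-errors for the checks known to fail inside a docker-driver container (see the Start line above). A pared-down, hand-runnable version of that invocation, using a subset of the flag list from the log:

	# demote the checks that cannot pass inside a container to warnings;
	# the config path matches the one minikube uploaded earlier
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
		--ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem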
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
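The failure mode here is kubeadm's wait-control-plane phase: it polls the kubelet's local healthz endpoint on 127.0.0.1:10248 for up to 4m0s, and the kubelet never came up. The same probe and the triage steps the error message suggests can be run by hand on the node:

	# what kubeadm polls during wait-control-plane (connection refused in this run)
	curl -sSL http://127.0.0.1:10248/healthz
	# the troubleshooting commands the error message recommends
	systemctl status kubelet
	journalctl -xeu kubelet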
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
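After a failed init, minikube tears down the partial control plane with kubeadm reset and confirms the kubelet service is no longer active before retrying, as the two Run lines above show. The same cleanup by hand:

	# wipe partial control-plane state before a retry (flags copied from the log)
	sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	# is-active exits nonzero once the kubelet is stopped
	sudo systemctl is-active --quiet kubelet || echo "kubelet stopped"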
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
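The sequence above is the post-mortem inventory: each control-plane component is looked up by container name through crictl, and every query returns an empty ID list because no container ever started. The same queries by hand:

	# list container IDs for one component (empty in this run)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# or filter by pod namespace label, as the earlier kube-system query did
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system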
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
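The describe-nodes failure is a downstream symptom: with no kubelet there are no static pods, so nothing listens on the apiserver port and every kubectl call is refused. A quick hypothetical probe to confirm that the apiserver, not kubectl configuration, is the missing piece:

	# nothing should answer here while the control plane is down (-k skips TLS verification)
	curl -k https://localhost:8443/healthz || echo "apiserver not listening"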
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
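The container-status command above uses a small fallback chain: "which crictl || echo crictl" substitutes the bare name when crictl is not on PATH (so the failure message stays readable), and the trailing "|| sudo docker ps -a" falls back to the docker CLI if the crictl invocation fails. The same pattern isolated:

	# prefer crictl when installed, otherwise fall back to the docker CLI
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a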
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079339  281419 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:20.088336  281419 out.go:203] 
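This start, and the parallel one that follows, fail the same way: kubeadm's four-minute wait on the kubelet healthz expires because the kubelet never comes up, and the cgroups v1 warning in the stderr above is the actual cause (the kubelet journal further down makes this explicit). To check which cgroup hierarchy a host mounts, the usual probe is standard util-linux stat; cgroup2fs means cgroups v2, tmpfs means the v1 hybrid:

	stat -fc %T /sys/fs/cgroup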
	I1205 07:43:24.912154  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000692536s
	I1205 07:43:24.912179  282781 kubeadm.go:319] 
	I1205 07:43:24.912237  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:24.912269  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:24.912374  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:24.912378  282781 kubeadm.go:319] 
	I1205 07:43:24.912483  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:24.912515  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:24.912545  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:24.912549  282781 kubeadm.go:319] 
	I1205 07:43:24.918373  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:24.918871  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:24.919001  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:24.919288  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:24.919298  282781 kubeadm.go:319] 
	I1205 07:43:24.919374  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:24.919431  282781 kubeadm.go:403] duration metric: took 8m6.798617744s to StartCluster
	I1205 07:43:24.919465  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:24.919523  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:24.960538  282781 cri.go:89] found id: ""
	I1205 07:43:24.960597  282781 logs.go:282] 0 containers: []
	W1205 07:43:24.960612  282781 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:24.960628  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:24.960720  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:25.008615  282781 cri.go:89] found id: ""
	I1205 07:43:25.008645  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.008654  282781 logs.go:284] No container was found matching "etcd"
	I1205 07:43:25.008660  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:25.008731  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:25.051444  282781 cri.go:89] found id: ""
	I1205 07:43:25.051465  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.051473  282781 logs.go:284] No container was found matching "coredns"
	I1205 07:43:25.051479  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:25.051537  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:25.082467  282781 cri.go:89] found id: ""
	I1205 07:43:25.082489  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.082555  282781 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:25.082563  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:25.082640  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:25.147881  282781 cri.go:89] found id: ""
	I1205 07:43:25.147902  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.147911  282781 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:25.147917  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:25.147976  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:25.224329  282781 cri.go:89] found id: ""
	I1205 07:43:25.224361  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.224370  282781 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:25.224378  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:25.224434  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:25.250842  282781 cri.go:89] found id: ""
	I1205 07:43:25.250870  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.250879  282781 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:25.250889  282781 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:25.250901  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:25.319837  282781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:25.312291    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.313007    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314611    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314898    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.316383    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:25.312291    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.313007    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314611    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314898    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.316383    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:43:25.319857  282781 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:25.319870  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:25.371742  282781 logs.go:123] Gathering logs for container status ...
	I1205 07:43:25.371978  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:43:25.409796  282781 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:25.409818  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:25.474308  282781 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:25.474345  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:43:25.487408  282781 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:25.487510  282781 out.go:285] * 
	W1205 07:43:25.487601  282781 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:25.487658  282781 out.go:285] * 
	W1205 07:43:25.490185  282781 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:25.493272  282781 out.go:203] 
	W1205 07:43:25.494648  282781 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:25.494700  282781 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:25.494737  282781 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:25.496566  282781 out.go:203] 
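Note that the kubeadm command above already lists SystemVerification under --ignore-preflight-errors, so the cgroups v1 preflight warning itself is non-fatal; the hard stop is the kubelet's own configuration validation, visible in the kubelet journal below. The opt-out the warning names would have to reach the kubelet configuration. A minimal sketch of that fragment, assuming the YAML field is the lower-camel form of the option named in the warning (failCgroupV1) and using a hypothetical file name:

	cat <<'EOF' > kubelet-cgroupv1-optout.yaml   # hypothetical name; fragment to merge into the kubelet config
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The report's own "Suggestion:" line proposes the other route: rerun minikube start with --extra-config=kubelet.cgroup-driver=systemd.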
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:07 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:07.120012036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.515002428Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.517354324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.532799530Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.533584975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.745868119Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.748396074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.767530947Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.768194782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.006043536Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.008605838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.017778694Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.018958088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.461807200Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.464459398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.483967961Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.484870864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.041255724Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.061998057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.116644129Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.117620386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.606398197Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.608651268Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.616593045Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.616937615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:26.768904    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:26.769546    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:26.771389    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:26.771727    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:26.773223    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
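These refusals on localhost:8443 are a downstream symptom, not a separate fault: with the kubelet crash-looping, the kube-apiserver static pod is never launched, so nothing listens on the apiserver port. The probe kubeadm itself used (and reported as refused) targets the kubelet's healthz one layer below:

	curl -sSL http://127.0.0.1:10248/healthz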
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:43:26 up  2:25,  0 user,  load average: 0.95, 1.10, 1.71
	Linux newest-cni-622440 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:43:23 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:24 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 05 07:43:24 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:24 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:24 newest-cni-622440 kubelet[5438]: E1205 07:43:24.428424    5438 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:24 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:24 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:25 newest-cni-622440 kubelet[5481]: E1205 07:43:25.204031    5481 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:25 newest-cni-622440 kubelet[5530]: E1205 07:43:25.968704    5530 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:25 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:26 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 07:43:26 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:26 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:26 newest-cni-622440 kubelet[5611]: E1205 07:43:26.681257    5611 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:26 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:26 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
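The kubelet excerpt above is a textbook systemd crash loop: the restart counter climbs from 319 to 322 within three seconds, and every attempt exits during config validation with "kubelet is configured to not run on a host using cgroup v1". The two commands the kubeadm output recommends are enough to confirm this on the node:

	systemctl status kubelet
	journalctl -xeu kubelet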
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 6 (407.348708ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:43:27.428451  295293 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-622440" apiserver is not running, skipping kubectl commands (state="Stopped")
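The status check fails for a consistent follow-on reason: the "newest-cni-622440" entry was never written to the jenkins kubeconfig, so kubectl points at a stale endpoint and status degrades to "Stopped". The command the warning itself prints, scoped to this profile (the -p flag is standard minikube usage, added here):

	minikube update-context -p newest-cni-622440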
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (512.95s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-241270 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-241270 create -f testdata/busybox.yaml: exit status 1 (61.934091ms)

** stderr ** 
	error: context "no-preload-241270" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-241270 create -f testdata/busybox.yaml failed: exit status 1
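The create never reaches a cluster: the "no-preload-241270" context was never written because that profile's FirstStart failed, so kubectl rejects the --context flag up front. A quick check with standard kubectl shows the gap:

	kubectl config get-contexts    # "no-preload-241270" is absent, hence "context ... does not exist"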
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:52.549450094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eef7bdd89ca732078c94f4927e3c7a21319eafbef30f0346d5566202053e4aac",
	            "SandboxKey": "/var/run/docker/netns/eef7bdd89ca7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:39:6f:c0:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "3e81b46f5657325d06de99919670a1c40d711f2851cee0f84aa291f2a1c6cc3d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
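
The `NetworkSettings.Ports` block above is the same data minikube later queries with `docker container inspect -f` Go templates to locate the SSH endpoint (see the provisioning log further down). An equivalent lookup that decodes the inspect JSON directly, with the struct trimmed to just the fields used (a sketch, not minikube's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models only the fragment of `docker inspect` output we need.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect prints a JSON array, one element per container.
		out, err := exec.Command("docker", "container", "inspect", "no-preload-241270").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			panic("no inspect data")
		}
		// For the container above this prints 127.0.0.1:33088.
		ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("%s:%s\n", ssh.HostIp, ssh.HostPort)
	}
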
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 6 (346.883033ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:43:22.226353  294260 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
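
`minikube status` encodes component state in its exit code rather than treating every problem as fatal, which is why the harness records the non-zero exit but continues ("may be ok"). A sketch of how a harness can capture both the code and the output with os/exec (binary path and flags copied from the run above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-241270", "-n", "no-preload-241270")
		out, err := cmd.Output() // stdout is returned even on a non-zero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In the run above this prints: exit status 6, output: Running ...
			fmt.Printf("exit status %d, output: %s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err) // command did not start at all
		}
		fmt.Printf("status ok: %s", out)
	}
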
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-943366                                                                                                                                                                                                                                  │ old-k8s-version-943366       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p cert-expiration-379442                                                                                                                                                                                                                                  │ cert-expiration-379442       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
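
Both 404s above mean no preloaded image tarball is published for v1.35.0-beta.0, so minikube falls back to the per-image cache seen in the following lines. The availability probe amounts to an HTTP status check; a sketch with net/http (URL copied from the first warning):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusNotFound {
			// Matches the "status code: 404" warnings above.
			fmt.Println("no preload tarball; caching images individually")
			return
		}
		fmt.Println("preload available:", resp.Status)
	}
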
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
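
The three "skipping subnet ... that is taken" lines followed by "using free private subnet" show minikube stepping the third octet by 9 (49, 58, 67, 76) until it finds a /24 that no existing Docker bridge occupies. A self-contained sketch of that selection loop; the taken set is hard-coded here, whereas minikube derives it from `docker network inspect`:

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by other profiles, per the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		// Start at 192.168.49.0/24 and step the third octet by 9, as logged.
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 here
			return
		}
		fmt.Println("no free private subnet found")
	}
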
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
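
The shell fragment above is deliberately idempotent: it leaves /etc/hosts alone if any line already ends in the hostname, rewrites an existing 127.0.1.1 entry otherwise, and appends one as a last resort. The same guard expressed in Go for comparison (a loose sketch; minikube runs the shell version over SSH, and the suffix test here only approximates the whitespace-anchored grep):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the logged shell snippet's three branches.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
				return nil // hostname already mapped
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname) // no entry at all: append
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "no-preload-241270"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
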
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
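Annotation (not part of the log): `configureAuth` above generates a server certificate whose SANs match the logged list (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-241270). A minimal sketch of issuing such a certificate with Go's standard library; it is self-signed here for brevity, whereas the real flow signs with the CA key under `.minikube/certs/`:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-241270"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		// SANs as logged: IP and DNS entries kept separate per x509.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-241270"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```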
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
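Annotation (not part of the log): the run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place before the restart, most importantly forcing `SystemdCgroup = false` so containerd matches the "cgroupfs" driver detected on the host. A sketch of the same edit in Go (path and regex assumed from the logged sed expression):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```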
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
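Annotation (not part of the log): the subnet scan above walks candidate private /24s, stepping the third octet by 9 (49, 58, 67, 76, 85, ...), and takes the first one not already claimed by a docker bridge. A sketch of that selection loop, with `isTaken` standing in for the docker-network inspection minikube performs:

```go
package main

import "fmt"

// firstFreeSubnet mirrors the scan implied by the log: candidates start at
// 192.168.49.0/24 and step the third octet by 9 until a free one is found.
func firstFreeSubnet(isTaken func(cidr string) bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !isTaken(cidr) {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(func(c string) bool { return taken[c] })) // 192.168.85.0/24
}
```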
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
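Annotation (not part of the log): `--publish=127.0.0.1::22` in the run command above binds the container's sshd to a random host port, which is why later steps repeatedly recover it with the docker-inspect template seen throughout this log (yielding 33093 here). A sketch of that lookup, assuming the named container exists locally:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs the same inspect template the log shows to find the
// host port mapped to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := hostSSHPort("newest-cni-622440")
	fmt.Println(port, err)
}
```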
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
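Annotation (not part of the log): since no preload tarball exists for v1.35.0-beta.0, each image follows the flow above: `stat` the tarball on the node, scp it over when the stat fails, then import it with `ctr -n=k8s.io images import`. A sketch of that sequence, with `runOnNode` and `scp` as stand-ins for minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadCached mirrors the logged flow: existence check via stat; on failure,
// transfer the cached tarball, then import it into containerd's k8s.io
// namespace. Helper shapes are assumptions, not minikube's actual API.
func loadCached(runOnNode func(cmd string) error, scp func(src, dst string) error, local, remote string) error {
	if err := runOnNode(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err != nil {
		// stat exited non-zero -> image tarball not on the node yet
		if err := scp(local, remote); err != nil {
			return err
		}
	}
	return runOnNode("sudo ctr -n=k8s.io images import " + remote)
}

func main() {
	run := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() } // local stand-in
	cp := func(src, dst string) error { return exec.Command("cp", src, dst).Run() }
	err := loadCached(run, cp, "./pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1")
	fmt.Println(err)
}
```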
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
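
The containerd settings applied below are keyed off this detection step, so host and runtime always agree on a cgroup driver. As a rough illustration only (a hypothetical probe, not minikube's detect.go; it assumes the Docker CLI is on PATH and asks the daemon directly):

	// cgroup_probe.go — hypothetical sketch, not minikube's implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// detectCgroupDriver asks the local Docker daemon which cgroup driver it
	// uses ("cgroupfs" or "systemd"); containerd must then be configured to match.
	func detectCgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		driver, err := detectCgroupDriver()
		if err != nil {
			fmt.Println("detect failed:", err)
			return
		}
		fmt.Println("detected cgroup driver:", driver)
	}
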
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
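
The stop/disable/mask sequence above keeps dockerd and cri-docker from reclaiming the CRI socket on restart. A simplified Go sketch of the same idea (unit names from the log; errors are ignored because, as in the real flow, some units may simply not exist):

	// disable_units.go — simplified reconstruction of the systemctl calls above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// disableUnit stops a systemd unit immediately, disables it, and masks it
	// so nothing can start it again, mirroring the "stop -f"/"disable"/"mask"
	// calls in the log (the real code issues a subset per unit).
	func disableUnit(unit string) {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", unit},
			{"systemctl", "disable", unit},
			{"systemctl", "mask", unit},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				fmt.Printf("%v: %v (ignored, unit may be absent)\n", args, err)
			}
		}
	}

	func main() {
		for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
			disableUnit(u)
		}
	}
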
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
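
Each of these steps is a plain sed rewrite of /etc/containerd/config.toml; the central one flips SystemdCgroup to match the detected cgroupfs driver. A minimal Go equivalent of that single edit (hypothetical helper, treating the file as text exactly as the sed call does):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setSystemdCgroup rewrites the SystemdCgroup line in a containerd config,
	// as the sed invocation in the log does (false when the driver is cgroupfs).
	func setSystemdCgroup(path string, enabled bool) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
			fmt.Println(err)
		}
	}
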
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
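
Both 60 s waits above are simple polls. A sketch of the pattern (socket path and timeout taken from the log; the 500 ms polling interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the containerd socket appears or the deadline
	// passes, like the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed interval
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	}
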
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
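
With no preload tarball published for v1.35.0-beta.0, every required image takes the check-then-load path seen below: probe containerd for the image at the expected digest, and queue a transfer if it is missing. A rough sketch of just the probe (command line from the log; the match check is simplified):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent checks whether containerd's k8s.io namespace already holds
	// the named image, mirroring the "ctr -n=k8s.io images ls" probes below.
	func imagePresent(name string) (bool, error) {
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
			"name=="+name).Output()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), name), nil
	}

	func main() {
		ok, err := imagePresent("registry.k8s.io/pause:3.10.1")
		fmt.Println(ok, err)
	}
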
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
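
Each "Transferred and loaded ... from cache" line above is the tail of the same three-step loop: stat the tarball on the node, scp it over if absent, then ctr images import it. A condensed sketch of the loop (paths follow the log; scpToNode is a hypothetical stand-in for minikube's SSH runner):

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImage copies a cached image tarball to the node path used in
	// the log and imports it into containerd's k8s.io namespace.
	func loadCachedImage(cachePath string) error {
		dst := filepath.Join("/var/lib/minikube/images", filepath.Base(cachePath))
		if err := scpToNode(cachePath, dst); err != nil {
			return err
		}
		return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dst).Run()
	}

	// scpToNode is a stand-in for minikube's ssh_runner; the real code streams
	// the file over its own SSH client rather than shelling out to scp.
	func scpToNode(src, dst string) error {
		return exec.Command("scp", src, "node:"+dst).Run()
	}

	func main() {
		fmt.Println(loadCachedImage(
			"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"))
	}
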
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
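
The kindnet recommendation above is minikube's CNI auto-selection for this driver/runtime pair. A toy version of just that one rule (reconstructed from this log line alone; other combinations follow rules not visible here):

	package main

	import "fmt"

	// chooseCNI mirrors the decision logged above: the docker driver with the
	// containerd runtime gets kindnet; this toy covers only that single case.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "containerd" {
			return "kindnet"
		}
		return "" // other combinations use other rules not shown in this log
	}

	func main() { fmt.Println(chooseCNI("docker", "containerd")) }
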
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
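
With no cached binaries for this beta, kubeadm, kubelet, and kubectl are fetched from dl.k8s.io and verified against the published .sha256 files named in the URLs above. A self-contained sketch of that verify step (checksum scheme as referenced in the log; no retries or resume):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetchChecked downloads url and verifies the bytes against the hex digest
	// served at url+".sha256", the checksum scheme referenced in the log.
	func fetchChecked(url string) ([]byte, error) {
		body, err := get(url)
		if err != nil {
			return nil, err
		}
		sum, err := get(url + ".sha256")
		if err != nil {
			return nil, err
		}
		got := sha256.Sum256(body)
		if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
			return nil, fmt.Errorf("checksum mismatch for %s", url)
		}
		return body, nil
	}

	func get(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		b, err := fetchChecked("https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl")
		fmt.Println(len(b), err)
	}
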
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
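
The apiserver certificate generated above is signed by the shared minikubeCA and carries the service IP, localhost, and the node IP as SANs. A compact sketch of such a signing step with crypto/x509 (CA loading omitted; fields trimmed; the 26280h lifetime echoes CertExpiration in the config dump):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a serving cert with the IP SANs seen in the log
	// (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2), signed by caCert/caKey.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	}

	func main() { fmt.Println("see signServingCert; CA key/cert loading is omitted in this sketch") }
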
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
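
The 3ec20f2e.0, b5213941.0, and 51391683.0 symlinks above follow OpenSSL's subject-hash convention: openssl x509 -hash -noout prints the hash under which TLS libraries look a certificate up in /etc/ssl/certs. A sketch of creating one such link (same commands as the log; writing to /etc/ssl/certs needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of a PEM cert and
	// symlinks <hash>.0 in /etc/ssl/certs to it, as the log's ln -fs calls do.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // mirror ln's -f behaviour
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}
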
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
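The `ctr -n=k8s.io images import` runs above load the cached image tarballs into containerd's "k8s.io" namespace, which is the namespace the CRI plugin serves to kubelet and crictl; an image imported into the default containerd namespace would be invisible to Kubernetes. The same flow by hand, with a hypothetical tarball path:

    # import a saved image where the CRI plugin (and therefore kubelet) can see it
    sudo ctr -n k8s.io images import /tmp/kube-proxy_v1.35.0-beta.0.tar
    # confirm visibility through the CRI
    sudo crictl images | grep kube-proxy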
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
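Note the empty `ExecStart=` line in the kubelet unit text above: in a systemd drop-in, an empty assignment clears any ExecStart inherited from the base unit so the next line can redefine it; without the reset, systemd rejects a second ExecStart for a non-oneshot service. The drop-in minikube writes (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) boils down to:

    [Service]
    # empty assignment resets the inherited ExecStart
    ExecStart=
    # flags exactly as printed in the log above
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

followed by `systemctl daemon-reload`, which is exactly what the log shows next.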
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
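The block above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new and consumed by kubeadm as a whole. Recent kubeadm releases can statically check such a file before `init` is attempted; a sketch, assuming kubeadm is on PATH inside the node:

    # offline sanity check of the rendered config (kubeadm v1.26+)
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml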
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
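The `checksum=file:` suffix in the download URLs above tells minikube's fetcher to retrieve the sibling `.sha256` file and verify the binary against it before use. The equivalent check with standard tools, mirroring the documented kubectl install flow:

    curl -fLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    # dl.k8s.io checksum files contain only the hash, so supply the filename
    echo "$(curl -fL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256)  kubelet" | sha256sum --check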
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
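The /etc/hosts rewrite above filters out any previously tagged line, appends the fresh entry, and then `cp`s the temp file over /etc/hosts rather than renaming it: inside a Docker container /etc/hosts is a bind mount, so it can only be rewritten in place (cp truncates and reuses the same inode), while a rename onto the mount point would fail with "Device or resource busy". Spelled out, with the marker and address from the log:

    # idempotent host entry: drop any old tagged line, append the new one,
    # then copy over the bind-mounted file in place (mv would hit EBUSY)
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      echo $'192.168.85.2\tcontrol-plane.minikube.internal'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts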
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
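The cgroups v1 warning above carries its own remedy: on a cgroup v1 host, kubelet v1.35+ refuses to start unless explicitly told otherwise. A sketch of the KubeletConfiguration stanza the warning refers to; the option is named 'FailCgroupV1' in the warning, and its YAML key is assumed here to be the usual lowerCamelCase form:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # assumption: lowerCamelCase serialization of the 'FailCgroupV1' option
    failCgroupV1: false

The SystemVerification preflight check must be skipped as well, which minikube already does via the --ignore-preflight-errors list above.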
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
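When `kubeadm init` fails at wait-control-plane like this, the probe it gave up on is the kubelet's local healthz endpoint, so the fastest triage on the node is the two suggested commands plus the exact check kubeadm retried:

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sSL http://127.0.0.1:10248/healthz    # the probe behind the 4m0s wait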
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
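	Per the error text later in this run, the [kubelet-check] wait is equivalent to curl -sSL http://127.0.0.1:10248/healthz. While the wait is in progress, the same endpoint can be probed by hand from inside the node (a sketch; a healthy kubelet answers HTTP 200 with body "ok"):

	    # Probe the kubelet healthz endpoint the way kubeadm's check does.
	    curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10248/healthz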
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
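	These two commands are kubeadm's standing advice whenever the kubelet never reports healthy; run together they show the unit state and the most recent failure reason (a sketch):

	    # Unit state plus the tail of the kubelet journal, without paging.
	    systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50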
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
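	Before retrying, minikube tears the failed control plane back down with kubeadm reset; --force skips the confirmation prompt and --cri-socket points reset at the containerd socket this driver uses. The equivalent manual recovery, sketched from the commands in this log:

	    # Wipe the failed init state, then re-run init with the same config file.
	    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml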
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
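	What follows is minikube's post-mortem sweep: it asks the CRI for every control-plane container by name, and each query comes back empty because the kubelet never launched the static pods. The per-component listing reduces to a loop like this (a sketch using the exact crictl invocation from the log):

	    # List CRI containers (any state) for each expected control-plane component.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$name"
	    done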
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
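	With no containers found, minikube falls back to node-level diagnostics. The gathering steps above can be reproduced in one pass (a sketch; the crictl-or-docker fallback mirrors the final command in the log):

	    # Collect the same logs minikube gathers when a cluster fails to start.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a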
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
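	An invocation applying that suggestion would look like the following (a sketch: the profile name is a placeholder, and the flag comes verbatim from the suggestion above, not from the failing command itself):

	    # Hypothetical retry with the suggested kubelet cgroup-driver override.
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd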
	I1205 07:43:20.088336  281419 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:00 no-preload-241270 containerd[758]: time="2025-12-05T07:35:00.872186619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.701941885Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.704289218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722125402Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722911774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.075081950Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.078766218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.099917836Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.100531825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.790505473Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.792674113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.806940960Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.807327368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.207463637Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.209905191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.218221241Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.219001377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.595991834Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.598386708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.607030393Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.608072538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.091545558Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.093932416Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108389516Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108843487Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:22.864706    5773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:22.877419    5773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:22.879012    5773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:22.879555    5773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:22.881065    5773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:43:22 up  2:25,  0 user,  load average: 0.69, 1.05, 1.70
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:43:19 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:20 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:20 no-preload-241270 kubelet[5558]: E1205 07:43:20.696996    5558 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:20 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:21 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 05 07:43:21 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:21 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:21 no-preload-241270 kubelet[5653]: E1205 07:43:21.473138    5653 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:21 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:21 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 kubelet[5681]: E1205 07:43:22.182402    5681 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 kubelet[5777]: E1205 07:43:22.932436    5777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
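The kubelet section above pinpoints the failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so it crash-loops (restart counter 321 through 324), the static kube-apiserver pod is never created, and every kubectl call to localhost:8443 is refused. A minimal diagnostic, assuming a shell on the host or inside the node container (this is a sketch, not part of the test run):

    # Check which cgroup hierarchy is mounted at /sys/fs/cgroup:
    stat -fc %T /sys/fs/cgroup/
    # "cgroup2fs" -> unified cgroup v2 hierarchy (accepted by this kubelet)
    # "tmpfs"     -> legacy cgroup v1 hierarchy (rejected with the error above)

Ubuntu 20.04, which this runner reports (kernel 5.15.0-1084-aws), boots with cgroup v1 by default, which would explain the validation failure.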
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 6 (340.422114ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:43:23.455446  294489 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
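On top of the apiserver being down, `status` exits with code 6 because the profile's entry is missing from the shared kubeconfig ("no-preload-241270" does not appear in .../kubeconfig). A sketch of how one might confirm and repair the context with standard kubectl/minikube commands (not executed by the harness):

    # List contexts known to the test kubeconfig:
    kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/21997-2385/kubeconfig
    # Re-point the kubectl context at the profile, as the warning suggests:
    out/minikube-linux-arm64 -p no-preload-241270 update-context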
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
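With kubelet never passing validation, no static pods are launched, so the "Stopped" apiserver state here is the expected downstream symptom. One way to verify directly on the node, assuming crictl is available inside the kicbase container (a hypothetical check, not part of this run):

    # List apiserver containers via the CRI; expect no output while kubelet crash-loops:
    docker exec no-preload-241270 crictl ps -a --name kube-apiserver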
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:52.549450094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eef7bdd89ca732078c94f4927e3c7a21319eafbef30f0346d5566202053e4aac",
	            "SandboxKey": "/var/run/docker/netns/eef7bdd89ca7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:39:6f:c0:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "3e81b46f5657325d06de99919670a1c40d711f2851cee0f84aa291f2a1c6cc3d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
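The inspect output shows the kicbase container itself is healthy: it is Running, privileged, attached to the dedicated no-preload-241270 bridge network at 192.168.76.2, and each published port is bound to 127.0.0.1 with a dynamically assigned host port (8443/tcp maps to 33091 in this run). A Go-template one-liner to extract the apiserver's host port, matching the JSON structure above (a sketch; the port value is specific to this run):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-241270
    # -> 33091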
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 6 (348.984675ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 07:43:23.821574  294566 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-943366                                                                                                                                                                                                                                  │ old-k8s-version-943366       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p cert-expiration-379442                                                                                                                                                                                                                                  │ cert-expiration-379442       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
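
The subnet scan above walks the private 192.168.x.0/24 range (49, 58, 67, 76, ... in this run) and treats a candidate as taken when a host bridge interface (br-...) already sits inside it, then creates the docker network on the first free one. A minimal, self-contained Go sketch of that decision; this is not minikube's actual network.go, and the step of nine between third octets is inferred from this run's output:

package main

import (
	"fmt"
	"net"
)

// candidateSubnets mirrors the 192.168.x.0/24 progression seen in the log.
func candidateSubnets() []string {
	var out []string
	for third := 49; third <= 255; third += 9 {
		out = append(out, fmt.Sprintf("192.168.%d.0/24", third))
	}
	return out
}

// taken reports whether any local interface address falls inside cidr,
// which is how a host-side bridge claims a subnet.
func taken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for _, c := range candidateSubnets() {
		if free, err := taken(c); err == nil && !free {
			fmt.Println("using free private subnet", c)
			return
		}
		fmt.Println("skipping subnet", c, "that is taken")
	}
}
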
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
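
The kic ssh key step above generates a machine-local RSA keypair under .minikube/machines/<name>/, copies the public half into the container as /home/docker/.ssh/authorized_keys (the 381-byte write in the log), and fixes its ownership via docker exec. A hedged sketch of the key-generation half, using the standard library plus golang.org/x/crypto/ssh; the 2048-bit size is an assumption consistent with the ~381-byte authorized_keys line:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA (assumed); the private half becomes id_rsa.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	// This single line is what lands in /home/docker/.ssh/authorized_keys.
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
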
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
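
The server cert above is issued from the machine CA (ca.pem / ca-key.pem) with the san=[...] list split into IP and DNS entries, so the docker daemon endpoint answers TLS for the loopback port-forward, the static container IP, and the node names alike. A sketch of an equivalent issuance with crypto/x509, assuming a freshly generated stand-in CA rather than the real on-disk one:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Stand-in for ca.pem / ca-key.pem; the real run loads them from .minikube/certs.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-241270"}},
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// The san=[...] list from the log, split into IP and DNS entries.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-241270"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem contents
}
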
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
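
The find/mv pass above parks every bridge or podman CNI config with a .mk_disabled suffix so the container runtime's default CNI cannot race the one minikube configures. A small Go sketch of the same rename pass (it needs root against a real /etc/cni/net.d, just like the sudo find in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI parks every bridge or podman CNI config under dir with a
// .mk_disabled suffix, leaving already-disabled files alone.
func disableBridgeCNI(dir string) ([]string, error) {
	var moved []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return moved, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println("disabled:", moved, "err:", err)
}
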
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
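
The sed pass above rewrites /etc/containerd/config.toml in place: pin the pause image, switch the runc shim to io.containerd.runc.v2, and, because the host cgroup driver was detected as "cgroupfs", force SystemdCgroup = false. A direct Go translation of one of those rewrites, the SystemdCgroup flip, using the same capture-group trick as the sed command:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment like the one containerd ships; the sed in the log flips it.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same rewrite as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}

The ${1} back-reference plays the role of sed's \1, preserving the original indentation so the TOML table structure is untouched.
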
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
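
The one-liner above is the standard minikube hosts update: filter out any stale host.minikube.internal line, append the gateway mapping, stage the result in /tmp, and sudo cp it back. Copying over the file rather than mv-ing it matters because /etc/hosts inside a Docker container is a bind mount that must keep its inode. A minimal Go sketch of the same rewrite:

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any stale host.minikube.internal line, appends the
// gateway mapping, and writes the file back in place (cp semantics, not mv).
func rewriteHosts(path, gatewayIP string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, gatewayIP+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(rewriteHosts("/etc/hosts", "192.168.76.1"))
}
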
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
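
What follows is a per-image existence check against each pinned digest (the containerd.go:267 lines below); when the check fails the image is marked "needs transfer", removed with crictl rmi, and reloaded from the on-disk cache. A loose Go sketch of that decision, under the assumption that grepping the ctr images ls output for the digest approximates the real config-digest comparison:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// existsAtSha lists the image by name in the k8s.io namespace and looks for
// the pinned digest in the output. (A shortcut: the real check compares the
// image's config digest rather than grepping text.)
func existsAtSha(name, sha string) bool {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+name).CombinedOutput()
	return err == nil && strings.Contains(string(out), sha)
}

func main() {
	name := "registry.k8s.io/pause:3.10.1"
	sha := "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	if !existsAtSha(name, sha) {
		// "needs transfer": remove whatever is cached under that name, then
		// reload the pinned build from .minikube/cache.
		exec.Command("sudo", "crictl", "rmi", name).Run()
		fmt.Println(name, "queued for load from cache")
	}
}
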
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
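
Each scp above is gated by a stat probe: the "Process exited with status 1" / "cannot statx" results are the expected miss path that triggers the copy, so a re-run with the file already in /var/lib/minikube/images skips the transfer. A minimal sketch of that gate, shelling out to ssh/scp as stand-ins for minikube's ssh_runner; the key path and cache path are hypothetical, while 33088 is this node's published SSH port from the log:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImageFile stats the target inside the node and only copies the cached
// tarball over when stat exits non-zero (the file is absent).
func ensureImageFile(local, remote string) error {
	ssh := func(args ...string) *exec.Cmd {
		base := []string{"-i", "id_rsa", "-p", "33088", "docker@127.0.0.1"}
		return exec.Command("ssh", append(base, args...)...)
	}
	if err := ssh("stat", "-c", "%s %y", remote).Run(); err == nil {
		return nil // already on the node; skip the copy
	}
	return exec.Command("scp", "-i", "id_rsa", "-P", "33088",
		local, "docker@127.0.0.1:"+remote).Run()
}

func main() {
	err := ensureImageFile("/tmp/cache/pause_3.10.1", // hypothetical local cache path
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println("transfer:", err)
}
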
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
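
The server cert above is generated on the host with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-622440) and then copied into /etc/docker on the node. A rough sketch of the same idea with Go's standard crypto/x509 follows; it is self-signed for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-622440"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the log line above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-622440"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // Self-signed in this sketch; minikube passes its CA cert and key as the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
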
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
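
After the containerd restart, start.go gives the runtime 60s to expose its socket before giving up. A hypothetical polling loop with the same shape (path and timeout taken from the log, poll interval invented for the sketch):

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        const sock = "/run/containerd/containerd.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            // Same check as the "stat /run/containerd/containerd.sock" run above.
            if _, err := os.Stat(sock); err == nil {
                log.Printf("%s is ready", sock)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("timed out waiting for %s", sock)
            }
            time.Sleep(500 * time.Millisecond) // hypothetical poll interval
        }
    }
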
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
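
The load loop above is sequential per image: stat the tarball on the node, scp it over if missing, then import it into the k8s.io containerd namespace, which is why LoadCachedImages totals ~11.4s here. A minimal sketch of just the import step with os/exec (the tarball list below is a hypothetical subset of the paths shown in the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Hypothetical subset of the cached-image tarballs from the log.
        tarballs := []string{
            "/var/lib/minikube/images/etcd_3.6.5-0",
            "/var/lib/minikube/images/coredns_v1.13.1",
            "/var/lib/minikube/images/storage-provisioner_v5",
        }
        for _, tb := range tarballs {
            // Same command shape as the "ctr -n=k8s.io images import" runs above.
            out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tb).CombinedOutput()
            if err != nil {
                log.Fatalf("import %s failed: %v\n%s", tb, err, out)
            }
            log.Printf("imported %s", tb)
        }
    }
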
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
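The kubeadm config dumped above is rendered from profile values (node IP, API server port, CRI socket, node name, pod subnet). As a toy illustration, not minikube's actual template, rendering just the InitConfiguration fragment with text/template and values taken from the log might look like this:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        // Values copied from the kubeadm.go:190 options line above.
        data := struct {
            NodeIP, CRISocket, NodeName string
            Port                        int
        }{"192.168.76.2", "/run/containerd/containerd.sock", "no-preload-241270", 8443}
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }
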
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
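
Because this is a no-preload run, binary.go skips the local binary cache and the kubeadm/kubectl/kubelet binaries are fetched from dl.k8s.io, each verified against its published .sha256 file (the checksum=file:... suffix in the URLs above). A compressed sketch of that download-and-verify step, using the kubectl URL from the log with error handling simplified:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        const url = "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"

        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        f, err := os.Create("kubectl")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Hash while writing so the file is streamed only once.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            log.Fatal(err)
        }

        // The .sha256 sidecar holds the expected hex digest.
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            log.Fatal(err)
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            log.Fatal(err)
        }

        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            log.Fatalf("checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want)))
        }
        log.Println("checksum OK")
    }
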
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
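Note: the profile certs generated above are signed by the shared minikubeCA, and the apiserver cert's IP SANs (10.96.0.1 = first address of the 10.96.0.0/12 ServiceCIDR, 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2) cover every address clients may dial. A minimal crypto/x509 sketch of issuing such a cert; this is an illustration of the shape, not minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a CA-signed serving cert with the given IP SANs,
// the same shape as the apiserver profile cert in the log above.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// throwaway self-signed CA just to make the sketch runnable
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")}
	if _, _, err := signServingCert(caCert, caKey, ips); err != nil {
		panic(err)
	}
}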
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
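Note: the openssl x509 -hash / ln -fs sequence above installs each PEM into OpenSSL's hashed trust directory; b5213941.0 is the subject-hash filename the hash command printed for minikubeCA.pem. A sketch that shells out the same way (hypothetical helper name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrusted links pem into /etc/ssl/certs under its OpenSSL subject
// hash, e.g. /etc/ssl/certs/b5213941.0, so TLS clients using the system
// trust store can resolve it.
func installTrusted(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirrors ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}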
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
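Note: the empty result (found id: "") means no kube-system containers exist in containerd yet, so StartCluster is dealing with a fresh node rather than cleaning up a previous run. A sketch of the same crictl query:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers in any state
// labelled with the kube-system namespace -- the query the log issues above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 on a fresh node
}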
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
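Note: the loop above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any that does not match; here every grep exits 2 because the files do not exist, so each rm -f is a no-op and the "config check failed, skipping stale config cleanup" path is taken. A sketch of the loop, under the simplifying assumption that a missing file is treated like a stale one:

package main

import (
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint. Missing files fall through to the same
// remove call, which, like rm -f, is harmless when nothing is there.
func cleanStaleKubeconfigs(endpoint string) {
	for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + c
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}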
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
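Note: the image-loading pass above probes each tarball on the node with stat, scp's it from the host cache when the probe fails (status 1), then imports it into containerd's k8s.io namespace with ctr. A local sketch of the check-then-import step (paths illustrative, no SSH hop):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage imports an image tarball into containerd's k8s.io
// namespace; if the tarball is absent, the real flow would first transfer
// it from the host-side cache, as the scp lines above show.
func loadCachedImage(tar string) error {
	if _, err := os.Stat(tar); err != nil {
		return fmt.Errorf("tarball missing, would transfer from cache first: %w", err)
	}
	cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar) // same command the log runs over SSH
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0")
}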
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
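Note: the kubelet drop-in above pins ExecStart to the version-specific binary under /var/lib/minikube/binaries and injects the node name and IP from the node config. A sketch of rendering that drop-in with text/template; the field names are illustrative, not minikube's actual template data:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a simplified stand-in for the drop-in shown in the log.
const kubeletUnit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.35.0-beta.0",
		"NodeName":          "newest-cni-622440",
		"NodeIP":            "192.168.85.2",
	})
}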
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
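Note: the multi-document config above is what kubeadm init consumes from /var/tmp/minikube/kubeadm.yaml; recent kubeadm builds can also lint such a file with kubeadm config validate, if that subcommand is available. A quick Go sanity check of the KubeletConfiguration document (assumes the gopkg.in/yaml.v3 dependency; the asserted fields are taken from the dump above):

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

const rendered = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal([]byte(rendered), &cfg); err != nil {
		panic(err)
	}
	if cfg["kind"] != "KubeletConfiguration" {
		panic("unexpected kind")
	}
	ep, _ := cfg["containerRuntimeEndpoint"].(string)
	if !strings.HasPrefix(ep, "unix://") {
		panic("CRI endpoint must be a unix socket")
	}
	fmt.Println("kubelet config looks sane:", ep)
}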
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
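Note: the binary URLs above carry a checksum=file:...sha256 query in go-getter style, which instructs the downloader to fetch the published .sha256 file and verify the binary against it before use. A self-contained sketch of that verification with plain net/http (not minikube's actual downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dst and checks it against the hex digest
// published at url+".sha256" -- the contract implied by the checksum=file:
// query strings in the log above.
func fetchVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if hex.EncodeToString(h.Sum(nil)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm", "/tmp/kubeadm"))
}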
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
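Note: the failure above is kubeadm's wait-control-plane phase polling http://127.0.0.1:10248/healthz for its 4m0s budget and getting connection refused throughout, i.e. the kubelet never bound its health port; minikube then resets and retries, as the following lines show. A sketch of the same probe loop:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint the way the failed
// kubeadm phase above does (the log shows a 4m0s budget).
func waitKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second) // connection refused until the kubelet is up
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitKubeletHealthy(4 * time.Minute))
}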
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
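	(Editor's note: the probe kubeadm is retrying in the block above is kubelet's local healthz endpoint. It can be run by hand to confirm kubelet never came up; a sketch, assuming the docker-driver node for this run's profile is reachable via `minikube ssh`:
		out/minikube-linux-arm64 ssh -p no-preload-241270 -- curl -sSL http://127.0.0.1:10248/healthz
	A healthy kubelet answers "ok"; the "connection refused" quoted above means nothing was listening on 10248 at all.)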
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
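	(Editor's note: the four grep/rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at control-plane.minikube.internal:8443. A shell sketch of the same logic, assuming the same four file names:
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done
	Here every grep exits with status 2 because the files are already gone, so the rm calls are no-ops.)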
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
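	(Editor's note: the cgroups v1 warning repeated above names the kubelet configuration option FailCgroupV1. In the config file the [kubelet-start] phase writes (config.yaml), the opt-in it describes is a one-line fragment; a sketch assuming the stock KubeletConfiguration schema, where the YAML field is spelled failCgroupV1:
		# fragment of a KubeletConfiguration (kubelet.config.k8s.io/v1beta1) -- assumed spelling
		failCgroupV1: false
	As the warning itself says, this only removes the hard failure for kubelet v1.35+ on cgroup v1 hosts; the SystemVerification preflight check still has to be skipped separately.)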
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079339  281419 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:20.088336  281419 out.go:203] 
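	# Annotation: the exit above is the cgroup v1 gate named in the preflight
	# warning earlier in this log. kubelet v1.35.0-beta.0 validates the host
	# cgroup hierarchy at startup and refuses to run on cgroups v1 unless the
	# kubelet configuration sets FailCgroupV1 to false. A sketch of the check
	# plus the workaround minikube itself suggests (not verified by this run):
	stat -fc %T /sys/fs/cgroup/   # "tmpfs" = cgroups v1 (this host), "cgroup2fs" = v2
	minikube start -p no-preload-241270 --extra-config=kubelet.cgroup-driver=systemd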
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:00 no-preload-241270 containerd[758]: time="2025-12-05T07:35:00.872186619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.701941885Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.704289218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722125402Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722911774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.075081950Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.078766218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.099917836Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.100531825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.790505473Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.792674113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.806940960Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.807327368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.207463637Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.209905191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.218221241Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.219001377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.595991834Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.598386708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.607030393Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.608072538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.091545558Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.093932416Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108389516Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108843487Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
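	# Annotation: the ImageCreate/ImageUpdate events above show every
	# v1.35.0-beta.0 control-plane image was pulled into containerd, so the
	# failure below is not image availability. A sketch to confirm from the
	# node (assumes SSH access to the profile):
	minikube ssh -p no-preload-241270 -- sudo crictl images | grep v1.35.0-beta.0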
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:24.494822    5910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:24.495567    5910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:24.497275    5910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:24.497892    5910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:24.499485    5910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
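	# Annotation: this kubectl failure is downstream of the kubelet loop.
	# With the kubelet never healthy, no static pods start, so nothing listens
	# on the apiserver port inside the node. A minimal probe (sketch):
	curl -sk https://localhost:8443/healthz   # expect: connection refused
	ss -ltn 'sport = :8443'                   # expect: no listener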
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:43:24 up  2:25,  0 user,  load average: 0.69, 1.05, 1.70
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:43:21 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 kubelet[5681]: E1205 07:43:22.182402    5681 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:22 no-preload-241270 kubelet[5777]: E1205 07:43:22.932436    5777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:22 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:23 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 05 07:43:23 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:23 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:23 no-preload-241270 kubelet[5808]: E1205 07:43:23.690468    5808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:23 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:23 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:43:24 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 05 07:43:24 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:24 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:43:24 no-preload-241270 kubelet[5900]: E1205 07:43:24.402385    5900 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:43:24 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:43:24 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
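The journal excerpt above shows the kubelet crash-looping under systemd (restart counter 323 through 326), each attempt exiting on the same cgroup v1 validation error. The two commands the kubeadm output recommended earlier give the same view from the node:

	# From the troubleshooting hint in the kubeadm output:
	systemctl status kubelet
	journalctl -xeu kubelet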
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 6 (360.687626ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:43:24.998334  294794 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.20s)
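The status output explains the cascade: the profile's endpoint was never written to the test kubeconfig because the start never completed, leaving kubectl pointed at a stale context. The repair the warning suggests, re-checked with the same status query (a sketch; with the cluster in this state, update-context may itself fail, since no endpoint was ever recorded):

	minikube update-context -p no-preload-241270
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270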

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (98.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.430315239s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
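The enable fails at kubectl's client-side validation step because fetching the OpenAPI schema requires a reachable apiserver. The `--validate=false` escape hatch quoted in the errors would only skip that schema fetch; the apply still has to POST to localhost:8443, which is refusing connections, so the addon cannot be enabled until the control plane is up. Paraphrasing the failing callback with validation off (sketch):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml
	# still fails: dial tcp [::1]:8443: connect: connection refused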
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-241270 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-241270 describe deploy/metrics-server -n kube-system: exit status 1 (58.467711ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-241270" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-241270 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
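The `context "no-preload-241270" does not exist` error matches the kubeconfig endpoint error from the status checks: the profile's entry was never written. Listing what the test kubeconfig actually contains (sketch; path taken from the error above):

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21997-2385/kubeconfig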
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:52.549450094Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eef7bdd89ca732078c94f4927e3c7a21319eafbef30f0346d5566202053e4aac",
	            "SandboxKey": "/var/run/docker/netns/eef7bdd89ca7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:39:6f:c0:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "3e81b46f5657325d06de99919670a1c40d711f2851cee0f84aa291f2a1c6cc3d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
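The inspect output shows the container itself is healthy: it is running, and 8443/tcp is published on the loopback (127.0.0.1:33091 under NetworkSettings.Ports). The refused connections therefore come from nothing listening inside the node, not from Docker networking. Extracting the mapping directly:

	docker port no-preload-241270 8443/tcp
	# or via a Go template over the same inspect data:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-241270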
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 6 (332.238606ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:45:01.842859  297014 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-379442                                                                                                                                                                                                                                  │ cert-expiration-379442       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:31 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:34:54.564320  282781 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:34:54.564546  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564575  282781 out.go:374] Setting ErrFile to fd 2...
	I1205 07:34:54.564598  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:34:54.564902  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:34:54.565440  282781 out.go:368] Setting JSON to false
	I1205 07:34:54.566401  282781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8241,"bootTime":1764911853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:34:54.566509  282781 start.go:143] virtualization:  
	I1205 07:34:54.570672  282781 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:34:54.575010  282781 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:34:54.575073  282781 notify.go:221] Checking for updates...
	I1205 07:34:54.579441  282781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:34:54.582467  282781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:34:54.587377  282781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:34:54.590331  282781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:34:54.593234  282781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:34:54.596734  282781 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:54.596829  282781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:34:54.638746  282781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:34:54.638881  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.723110  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-05 07:34:54.71373112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.723208  282781 docker.go:319] overlay module found
	I1205 07:34:54.726530  282781 out.go:179] * Using the docker driver based on user configuration
	I1205 07:34:54.729826  282781 start.go:309] selected driver: docker
	I1205 07:34:54.729851  282781 start.go:927] validating driver "docker" against <nil>
	I1205 07:34:54.729865  282781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:34:54.730603  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:34:54.814061  282781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:34:54.80392623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:34:54.814216  282781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1205 07:34:54.814233  282781 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 07:34:54.814448  282781 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:34:54.817656  282781 out.go:179] * Using Docker driver with root privileges
	I1205 07:34:54.820449  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:34:54.820517  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:34:54.820533  282781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:34:54.820632  282781 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:54.823652  282781 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:34:54.826400  282781 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:34:54.829321  282781 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:34:54.832159  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:54.832346  282781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:34:54.866220  282781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:34:54.866240  282781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:34:54.905418  282781 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:34:55.127272  282781 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
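	# Annotation: both 404s are expected; no preload tarball is published for
	# v1.35.0-beta.0, so minikube pulls each image individually (the
	# ImageCreate events in the containerd log). The same probe by hand,
	# URL verbatim from the line above:
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n 1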
	I1205 07:34:55.127472  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:34:55.127510  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json: {Name:mk199da181ecffa13d15cfa2c7c654b0a370d7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:55.127517  282781 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127770  282781 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127814  282781 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.127984  282781 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128114  282781 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128248  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:34:55.128265  282781 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 153.635µs
	I1205 07:34:55.128280  282781 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128249  282781 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128370  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:34:55.128400  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:34:55.128415  282781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 907.013µs
	I1205 07:34:55.128428  282781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:34:55.128407  282781 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 179.719µs
	I1205 07:34:55.128464  282781 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128383  282781 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:34:55.128510  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:34:55.128522  282781 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 71.566µs
	I1205 07:34:55.128528  282781 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:34:55.128441  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:34:55.128638  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:34:55.128687  282781 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 705.903µs
	I1205 07:34:55.128729  282781 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:34:55.128474  282781 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:34:55.128644  282781 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 879.419µs
	I1205 07:34:55.128808  282781 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128298  282781 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128601  282781 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:34:55.128935  282781 start.go:364] duration metric: took 65.568µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:34:55.128666  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:34:55.128988  282781 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.179238ms
	I1205 07:34:55.129009  282781 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:34:55.128849  282781 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:34:55.129040  282781 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 743.557µs
	I1205 07:34:55.129066  282781 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:34:55.129099  282781 cache.go:87] Successfully saved all images to host disk.
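
	Because the preload was missing, every image is resolved through the on-disk cache instead; each "exists ... skipping" line above reduces to a stat of a tarball whose filename is the image reference with ':' flattened to '_'. A sketch of that lookup with a hypothetical helper name (cachedImagePath is not a real minikube function):

	    package cache

	    import (
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    // cachedImagePath maps an image ref such as "registry.k8s.io/pause:3.10.1"
	    // to its tarball under .minikube/cache/images/<arch>/ (the ':' becomes '_',
	    // matching the paths above) and reports whether it is already on disk.
	    func cachedImagePath(cacheDir, arch, image string) (string, bool) {
	        p := filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))
	        _, err := os.Stat(p)
	        return p, err == nil
	    }
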
	I1205 07:34:55.128980  282781 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:34:55.129144  282781 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:34:51.482132  281419 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:51.482359  281419 start.go:159] libmachine.API.Create for "no-preload-241270" (driver="docker")
	I1205 07:34:51.482388  281419 client.go:173] LocalClient.Create starting
	I1205 07:34:51.482463  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:51.482494  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482510  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482565  281419 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:51.482581  281419 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:51.482597  281419 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:51.482961  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:51.498656  281419 cli_runner.go:211] docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:51.498737  281419 network_create.go:284] running [docker network inspect no-preload-241270] to gather additional debugging logs...
	I1205 07:34:51.498754  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270
	W1205 07:34:51.515396  281419 cli_runner.go:211] docker network inspect no-preload-241270 returned with exit code 1
	I1205 07:34:51.515424  281419 network_create.go:287] error running [docker network inspect no-preload-241270]: docker network inspect no-preload-241270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241270 not found
	I1205 07:34:51.515453  281419 network_create.go:289] output of [docker network inspect no-preload-241270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241270 not found
	
	** /stderr **
	I1205 07:34:51.515547  281419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:51.540706  281419 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:51.541027  281419 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:51.541392  281419 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:51.541780  281419 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3e30}
	I1205 07:34:51.541797  281419 network_create.go:124] attempt to create docker network no-preload-241270 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 07:34:51.541855  281419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241270 no-preload-241270
	I1205 07:34:51.644579  281419 network_create.go:108] docker network no-preload-241270 192.168.76.0/24 created
	I1205 07:34:51.644609  281419 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-241270" container
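
	The three "skipping subnet" lines and the final pick of 192.168.76.0/24 show the subnet scan: candidate private /24s advance by 9 in the third octet (49, 58, 67, 76, ...) until one is not yet claimed by a host bridge. A hedged sketch of that scan (firstFreeSubnet and the loop bound are assumptions, not minikube's exact network.go logic):

	    package network

	    import "fmt"

	    // firstFreeSubnet walks the candidate /24s in the order seen in the log
	    // and returns the first CIDR absent from taken, which would be built
	    // from `docker network inspect` output and host interface scanning.
	    func firstFreeSubnet(taken map[string]bool) (string, bool) {
	        for third := 49; third <= 247; third += 9 {
	            cidr := fmt.Sprintf("192.168.%d.0/24", third)
	            if !taken[cidr] {
	                return cidr, true
	            }
	        }
	        return "", false // exhausted the candidate range
	    }
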
	I1205 07:34:51.644693  281419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:51.664403  281419 cli_runner.go:164] Run: docker volume create no-preload-241270 --label name.minikube.sigs.k8s.io=no-preload-241270 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:51.703596  281419 oci.go:103] Successfully created a docker volume no-preload-241270
	I1205 07:34:51.703699  281419 cli_runner.go:164] Run: docker run --rm --name no-preload-241270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --entrypoint /usr/bin/test -v no-preload-241270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:52.419093  281419 oci.go:107] Successfully prepared a docker volume no-preload-241270
	I1205 07:34:52.419152  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:52.419281  281419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:52.419402  281419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:52.474323  281419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241270 --name no-preload-241270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241270 --network no-preload-241270 --ip 192.168.76.2 --volume no-preload-241270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:52.844284  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Running}}
	I1205 07:34:52.871353  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:52.893044  281419 cli_runner.go:164] Run: docker exec no-preload-241270 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:52.971944  281419 oci.go:144] the created container "no-preload-241270" has a running status.
	I1205 07:34:52.971975  281419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa...
	I1205 07:34:53.768668  281419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:53.945530  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:53.965986  281419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:53.966005  281419 kic_runner.go:114] Args: [docker exec --privileged no-preload-241270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:54.059371  281419 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:34:54.108271  281419 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:54.108367  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.132985  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.133345  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.133356  281419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:54.333364  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.333388  281419 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:34:54.333541  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.369719  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.371863  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.371893  281419 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:34:54.574524  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:34:54.574606  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:54.599195  281419 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:54.599492  281419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1205 07:34:54.599509  281419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:34:54.776549  281419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:34:54.776662  281419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:34:54.776695  281419 ubuntu.go:190] setting up certificates
	I1205 07:34:54.776705  281419 provision.go:84] configureAuth start
	I1205 07:34:54.776772  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:54.802455  281419 provision.go:143] copyHostCerts
	I1205 07:34:54.802525  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:34:54.802534  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:34:54.802614  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:34:54.802700  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:34:54.802706  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:34:54.802735  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:34:54.802784  281419 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:34:54.802797  281419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:34:54.802821  281419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:34:54.802868  281419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
	I1205 07:34:55.021879  281419 provision.go:177] copyRemoteCerts
	I1205 07:34:55.021961  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:34:55.022007  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.042198  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.146207  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:34:55.175055  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:34:55.196310  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:34:55.228238  281419 provision.go:87] duration metric: took 451.519136ms to configureAuth
	I1205 07:34:55.228267  281419 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:34:55.228447  281419 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:34:55.228461  281419 machine.go:97] duration metric: took 1.120172831s to provisionDockerMachine
	I1205 07:34:55.228468  281419 client.go:176] duration metric: took 3.746074827s to LocalClient.Create
	I1205 07:34:55.228481  281419 start.go:167] duration metric: took 3.746124256s to libmachine.API.Create "no-preload-241270"
	I1205 07:34:55.228492  281419 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:34:55.228503  281419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:34:55.228562  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:34:55.228610  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.249980  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.367085  281419 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:34:55.370694  281419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:34:55.370723  281419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:34:55.370734  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:34:55.370886  281419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:34:55.371031  281419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:34:55.371195  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:34:55.385389  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:34:55.415204  281419 start.go:296] duration metric: took 186.696466ms for postStartSetup
	I1205 07:34:55.415546  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.445124  281419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:34:55.445421  281419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:34:55.445469  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.465824  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.582588  281419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:34:55.589753  281419 start.go:128] duration metric: took 4.113009855s to createHost
	I1205 07:34:55.589783  281419 start.go:83] releasing machines lock for "no-preload-241270", held for 4.11313674s
	I1205 07:34:55.589860  281419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:34:55.609280  281419 ssh_runner.go:195] Run: cat /version.json
	I1205 07:34:55.609334  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.609553  281419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:34:55.609603  281419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:34:55.653271  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.667026  281419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:34:55.785816  281419 ssh_runner.go:195] Run: systemctl --version
	I1205 07:34:55.905848  281419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:34:55.913263  281419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:34:55.913352  281419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:34:55.955688  281419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
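
	The find/mv pair above disables competing CNI configs by renaming rather than deleting them, so the step stays reversible. The same idea as a Go sketch (disableBridgeCNI is a made-up helper; minikube performs this over SSH with find, exactly as logged):

	    package cni

	    import (
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    // disableBridgeCNI appends .mk_disabled to every bridge or podman CNI
	    // config in dir so the runtime ignores it, returning the renamed paths.
	    func disableBridgeCNI(dir string) ([]string, error) {
	        entries, err := os.ReadDir(dir)
	        if err != nil {
	            return nil, err
	        }
	        var moved []string
	        for _, e := range entries {
	            name := e.Name()
	            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
	                continue
	            }
	            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
	                src := filepath.Join(dir, name)
	                if err := os.Rename(src, src+".mk_disabled"); err != nil {
	                    return moved, err
	                }
	                moved = append(moved, src)
	            }
	        }
	        return moved, nil
	    }
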
	I1205 07:34:55.955713  281419 start.go:496] detecting cgroup driver to use...
	I1205 07:34:55.955752  281419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:34:55.955807  281419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:34:55.978957  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:34:55.992668  281419 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:34:55.992774  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:34:56.017505  281419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:34:56.046827  281419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:34:56.209514  281419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:34:56.405533  281419 docker.go:234] disabling docker service ...
	I1205 07:34:56.405600  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:34:56.470263  281419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:34:56.503296  281419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:34:56.815584  281419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:34:57.031532  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:34:57.059667  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:34:57.093975  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:34:57.103230  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:34:57.112469  281419 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:34:57.112537  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:34:57.123144  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.134066  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:34:57.144317  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:34:57.156950  281419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:34:57.168939  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:34:57.179688  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:34:57.190637  281419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:34:57.206793  281419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:34:57.215781  281419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:34:57.226983  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:57.420977  281419 ssh_runner.go:195] Run: sudo systemctl restart containerd
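
	Most of the sed calls above normalize /etc/containerd/config.toml; the decisive one is the cgroup-driver edit, because the host was detected as "cgroupfs" and containerd must therefore not use the systemd cgroup manager. That single rewrite as a minimal Go sketch (forceCgroupfs is hypothetical; the real change is the logged sed command):

	    package cruntime

	    import "regexp"

	    // systemdCgroupRe matches every SystemdCgroup assignment, keeping its
	    // indentation, mirroring the logged
	    // `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
	    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

	    // forceCgroupfs rewrites the config.toml content so containerd uses
	    // the cgroupfs driver, matching the host.
	    func forceCgroupfs(configTOML string) string {
	        return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	    }
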
	I1205 07:34:57.514033  281419 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:34:57.514159  281419 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:34:57.519057  281419 start.go:564] Will wait 60s for crictl version
	I1205 07:34:57.519141  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:57.523352  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:34:57.554146  281419 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:34:57.554218  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.577679  281419 ssh_runner.go:195] Run: containerd --version
	I1205 07:34:57.608177  281419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
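
	After restarting containerd, the start code does not assume the daemon is up: it polls for the control socket and then for crictl, each with a 60s budget, as the two "Will wait 60s" lines above show. A sketch of that poll loop under assumed names (waitForSocket is illustrative only):

	    package cruntime

	    import (
	        "errors"
	        "os"
	        "time"
	    )

	    // waitForSocket polls for path (e.g. /run/containerd/containerd.sock)
	    // until it exists or the timeout elapses.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return errors.New("timed out waiting for " + path)
	    }
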
	I1205 07:34:55.134539  282781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:34:55.134871  282781 start.go:159] libmachine.API.Create for "newest-cni-622440" (driver="docker")
	I1205 07:34:55.134936  282781 client.go:173] LocalClient.Create starting
	I1205 07:34:55.135040  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:34:55.135104  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135129  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135215  282781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:34:55.135272  282781 main.go:143] libmachine: Decoding PEM data...
	I1205 07:34:55.135292  282781 main.go:143] libmachine: Parsing certificate...
	I1205 07:34:55.135778  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:34:55.152795  282781 cli_runner.go:211] docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:34:55.152912  282781 network_create.go:284] running [docker network inspect newest-cni-622440] to gather additional debugging logs...
	I1205 07:34:55.152946  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440
	W1205 07:34:55.170809  282781 cli_runner.go:211] docker network inspect newest-cni-622440 returned with exit code 1
	I1205 07:34:55.170837  282781 network_create.go:287] error running [docker network inspect newest-cni-622440]: docker network inspect newest-cni-622440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-622440 not found
	I1205 07:34:55.170850  282781 network_create.go:289] output of [docker network inspect newest-cni-622440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-622440 not found
	
	** /stderr **
	I1205 07:34:55.170942  282781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:55.190601  282781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:34:55.190913  282781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:34:55.191232  282781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:34:55.191506  282781 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:34:55.191883  282781 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab4b80}
	I1205 07:34:55.191903  282781 network_create.go:124] attempt to create docker network newest-cni-622440 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:34:55.191967  282781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-622440 newest-cni-622440
	I1205 07:34:55.272466  282781 network_create.go:108] docker network newest-cni-622440 192.168.85.0/24 created
	I1205 07:34:55.272497  282781 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-622440" container
	I1205 07:34:55.272584  282781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:34:55.299615  282781 cli_runner.go:164] Run: docker volume create newest-cni-622440 --label name.minikube.sigs.k8s.io=newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:34:55.321227  282781 oci.go:103] Successfully created a docker volume newest-cni-622440
	I1205 07:34:55.321330  282781 cli_runner.go:164] Run: docker run --rm --name newest-cni-622440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --entrypoint /usr/bin/test -v newest-cni-622440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:34:55.874194  282781 oci.go:107] Successfully prepared a docker volume newest-cni-622440
	I1205 07:34:55.874264  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1205 07:34:55.874410  282781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:34:55.874535  282781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:34:55.945833  282781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-622440 --name newest-cni-622440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-622440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-622440 --network newest-cni-622440 --ip 192.168.85.2 --volume newest-cni-622440:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:34:56.334301  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Running}}
	I1205 07:34:56.365095  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.392463  282781 cli_runner.go:164] Run: docker exec newest-cni-622440 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:34:56.460482  282781 oci.go:144] the created container "newest-cni-622440" has a running status.
	I1205 07:34:56.460517  282781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa...
	I1205 07:34:56.767833  282781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:34:56.791395  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.811902  282781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:34:56.811920  282781 kic_runner.go:114] Args: [docker exec --privileged newest-cni-622440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:34:56.902529  282781 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:34:56.932575  282781 machine.go:94] provisionDockerMachine start ...
	I1205 07:34:56.932686  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:34:56.953532  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:34:56.953863  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:34:56.953871  282781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:34:56.954513  282781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43638->127.0.0.1:33093: read: connection reset by peer
	I1205 07:34:57.611218  281419 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:34:57.631313  281419 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:34:57.635595  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
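
	The bash one-liner above is an upsert on /etc/hosts: drop any stale host.minikube.internal line, then append one pointing at the docker network gateway (192.168.76.1 here) so the guest can reach the host. An equivalent hedged sketch in Go (upsertHostEntry is illustrative only):

	    package hosts

	    import "strings"

	    // upsertHostEntry removes any existing "<ip>\thost.minikube.internal"
	    // line from the hosts-file content and appends a fresh mapping to the
	    // gateway IP, mirroring the grep -v / echo pipeline above.
	    func upsertHostEntry(hostsFile, gatewayIP string) string {
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
	            if strings.HasSuffix(line, "\thost.minikube.internal") {
	                continue // drop the stale mapping first
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, gatewayIP+"\thost.minikube.internal")
	        return strings.Join(kept, "\n") + "\n"
	    }
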
	I1205 07:34:57.647819  281419 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:34:57.647943  281419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:34:57.648012  281419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:34:57.675975  281419 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:34:57.675998  281419 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:34:57.676035  281419 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.676242  281419 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.676321  281419 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.676541  281419 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.676664  281419 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.676744  281419 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.676821  281419 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.677443  281419 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.678747  281419 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:57.679204  281419 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:57.679446  281419 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:57.679490  281419 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:57.679628  281419 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:57.679730  281419 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:57.680191  281419 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:57.680226  281419 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:34:57.993134  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:34:57.993255  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:34:58.022857  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:34:58.022958  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.035702  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:34:58.035816  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.068460  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:34:58.068586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.069026  281419 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:34:58.069090  281419 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:34:58.069183  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.069262  281419 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:34:58.069305  281419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.069349  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.074525  281419 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:34:58.074618  281419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.074694  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.084602  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:34:58.084753  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.093856  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:34:58.093981  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.103085  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.103156  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.103215  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.103214  281419 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:34:58.103271  281419 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.103296  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.115763  281419 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:34:58.115803  281419 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.115854  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.116104  281419 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:34:58.116140  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.154653  281419 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:34:58.154740  281419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.154818  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192178  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.192267  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.192272  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.192322  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.192364  281419 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:34:58.192395  281419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.192421  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:34:58.192479  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.192482  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278470  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:34:58.278568  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:34:58.278766  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:34:58.278598  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:34:58.278641  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.278681  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.278865  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387623  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:34:58.387705  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
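[Editor's note] The "needs transfer" decisions above come from cache_images.go: each expected image is listed by name in containerd's k8s.io namespace and checked against the pinned hash; on a miss, the stale tag is removed with crictl rmi before the cached copy is loaded. A minimal sketch of that check, using the same ctr invocation as the log (the digest below is the pause:3.10.1 hash from this run; the substring match is a simplification of minikube's actual comparison):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagePresent mirrors `sudo ctr -n=k8s.io images ls name==<image>`:
    // the image counts as present only if the listing mentions the
    // expected hash; otherwise it "needs transfer" from the local cache.
    func imagePresent(name, sha string) (bool, error) {
    	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
    		"name=="+name).CombinedOutput()
    	if err != nil {
    		return false, fmt.Errorf("ctr images ls: %v: %s", err, out)
    	}
    	return strings.Contains(string(out), sha), nil
    }

    func main() {
    	ok, err := imagePresent("registry.k8s.io/pause:3.10.1",
    		"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd")
    	fmt.Println("present:", ok, "err:", err)
    }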
	I1205 07:34:58.387774  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:34:58.387840  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.387886  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.387626  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387984  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:34:58.388070  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.387990  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:58.387931  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:58.453644  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.453792  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:34:58.453804  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:34:58.453889  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:34:58.453762  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:34:58.453990  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.454049  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:34:58.454052  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:58.453951  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:34:58.453861  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:34:58.454295  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:58.453742  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.454372  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:58.542254  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.542568  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:34:58.542480  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:34:58.542630  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:34:58.542522  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542738  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:58.542540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:34:58.542768  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:34:58.578716  281419 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.578827  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:34:58.610540  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:34:58.610912  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
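[Editor's note] Each scp above is gated by the failed stat probe that precedes it: the image tarball is copied from the host-side cache into /var/lib/minikube/images only when it is missing there, then imported into containerd. A condensed sketch of that cycle (run locally here for illustration; minikube executes every step through ssh_runner inside the node container, and the example paths are modelled on this run):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"os/exec"
    )

    // ensureImageLoaded copies a cached image tarball to dst only if it
    // is missing, then imports it into containerd's k8s.io namespace.
    func ensureImageLoaded(cacheTar, dst string) error {
    	// Mirrors the `stat -c "%s %y" <dst>` probe in the log: a
    	// non-zero exit (file missing) is what triggers the copy step.
    	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err != nil {
    		src, err := os.Open(cacheTar)
    		if err != nil {
    			return fmt.Errorf("open cache: %w", err)
    		}
    		defer src.Close()
    		out, err := os.Create(dst)
    		if err != nil {
    			return fmt.Errorf("create target: %w", err)
    		}
    		defer out.Close()
    		if _, err := io.Copy(out, src); err != nil {
    			return fmt.Errorf("copy: %w", err)
    		}
    	}
    	// Matches `sudo ctr -n=k8s.io images import <dst>`: the k8s.io
    	// namespace is where the CRI plugin (and the kubelet) looks.
    	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dst).Run()
    }

    func main() {
    	err := ensureImageLoaded(
    		os.ExpandEnv("$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"),
    		"/var/lib/minikube/images/pause_3.10.1",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }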
	I1205 07:34:58.888566  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:34:59.021211  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:59.021289  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1205 07:34:59.068346  281419 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:34:59.068498  281419 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:34:59.068572  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864558  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.795954788s)
	I1205 07:35:00.864602  281419 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:00.864631  281419 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:00.864683  281419 ssh_runner.go:195] Run: which crictl
	I1205 07:35:00.864739  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.843433798s)
	I1205 07:35:00.864752  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:00.864766  281419 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.864805  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:00.873580  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
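[Editor's note] From here on the log interleaves two concurrent test runs: pid 281419 belongs to the no-preload-241270 profile and pid 282781 to the newest-cni-622440 profile, which is why the timestamps appear to jump backwards at each process switch.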
	I1205 07:35:00.270776  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.270817  282781 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:35:00.270899  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.371937  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.372299  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.372312  282781 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:35:00.613599  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:35:00.613706  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.642684  282781 main.go:143] libmachine: Using SSH client type: native
	I1205 07:35:00.643012  282781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1205 07:35:00.643028  282781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:35:00.802014  282781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
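[Editor's note] The empty command output above is the success case: the script only rewrites /etc/hosts when the hostname is missing, following the Debian convention of mapping the machine's own hostname to 127.0.1.1 so it stays resolvable without working DNS.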
	I1205 07:35:00.802045  282781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:35:00.802091  282781 ubuntu.go:190] setting up certificates
	I1205 07:35:00.802110  282781 provision.go:84] configureAuth start
	I1205 07:35:00.802183  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:00.827426  282781 provision.go:143] copyHostCerts
	I1205 07:35:00.827511  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:35:00.827525  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:35:00.827605  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:35:00.827724  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:35:00.827738  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:35:00.827769  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:35:00.827834  282781 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:35:00.827844  282781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:35:00.827871  282781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:35:00.827926  282781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:35:00.956019  282781 provision.go:177] copyRemoteCerts
	I1205 07:35:00.956232  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:35:00.956312  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:00.978988  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.089461  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:35:01.114938  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:35:01.142325  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:35:01.168254  282781 provision.go:87] duration metric: took 366.116888ms to configureAuth
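[Editor's note] configureAuth generated a server certificate signed by minikube's CA with the SANs listed above (loopback, the container IP, and the profile hostname), then copied ca.pem, server.pem and server-key.pem into /etc/docker on the node. A rough, self-contained sketch of the issuance step with crypto/x509 (the CA here is created on the fly, whereas minikube reuses ca.pem/ca-key.pem from its certs directory; errors elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikube's persisted one.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the SAN list from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-622440"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-622440"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }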
	I1205 07:35:01.168340  282781 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:35:01.168591  282781 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:35:01.168634  282781 machine.go:97] duration metric: took 4.236039989s to provisionDockerMachine
	I1205 07:35:01.168665  282781 client.go:176] duration metric: took 6.033716203s to LocalClient.Create
	I1205 07:35:01.168718  282781 start.go:167] duration metric: took 6.033833045s to libmachine.API.Create "newest-cni-622440"
	I1205 07:35:01.168742  282781 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:35:01.168766  282781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:35:01.168850  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:35:01.168915  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.192294  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.311598  282781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:35:01.315486  282781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:35:01.315516  282781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:35:01.315528  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:35:01.315596  282781 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:35:01.315698  282781 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:35:01.315872  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:35:01.326201  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:01.345964  282781 start.go:296] duration metric: took 177.196121ms for postStartSetup
	I1205 07:35:01.346371  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.368578  282781 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:35:01.369047  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:35:01.369150  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.391110  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.495164  282781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:35:01.500376  282781 start.go:128] duration metric: took 6.371211814s to createHost
	I1205 07:35:01.500460  282781 start.go:83] releasing machines lock for "newest-cni-622440", held for 6.371509385s
	I1205 07:35:01.500553  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:35:01.520704  282781 ssh_runner.go:195] Run: cat /version.json
	I1205 07:35:01.520755  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.520758  282781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:35:01.520826  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:01.542832  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.554863  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:01.750909  282781 ssh_runner.go:195] Run: systemctl --version
	I1205 07:35:01.758230  282781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:35:01.763670  282781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:35:01.763742  282781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:35:01.797683  282781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:35:01.797709  282781 start.go:496] detecting cgroup driver to use...
	I1205 07:35:01.797743  282781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:35:01.797800  282781 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:35:01.813916  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:35:01.835990  282781 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:35:01.836078  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:35:01.856191  282781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:35:01.879473  282781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:35:02.016063  282781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:35:02.186714  282781 docker.go:234] disabling docker service ...
	I1205 07:35:02.186836  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:35:02.211433  282781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:35:02.226230  282781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:35:02.421061  282781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:35:02.574247  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:35:02.588525  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:35:02.604182  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:35:02.613394  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:35:02.623017  282781 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:35:02.623089  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:35:02.632544  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.643699  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:35:02.656090  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:35:02.667307  282781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:35:02.675494  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:35:02.685933  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:35:02.697515  282781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:35:02.708706  282781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:35:02.723371  282781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:35:02.736002  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:02.875115  282781 ssh_runner.go:195] Run: sudo systemctl restart containerd
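[Editor's note] The block above rewrites the runtime configuration in place before restarting containerd: crictl is pointed at containerd's socket, the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, and SystemdCgroup is forced to false to match the cgroupfs driver detected on the host. A sketch of the same edits in Go instead of sed, covering only the crictl.yaml write and the two most important substitutions (paths are the real ones from the log; needs root):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	// Equivalent of `printf ... | sudo tee /etc/crictl.yaml`.
    	crictl := "runtime-endpoint: unix:///run/containerd/containerd.sock\n"
    	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0644); err != nil {
    		panic(err)
    	}

    	conf, err := os.ReadFile("/etc/containerd/config.toml")
    	if err != nil {
    		panic(err)
    	}
    	// sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
    		ReplaceAll(conf, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
    	// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
    		ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile("/etc/containerd/config.toml", conf, 0644); err != nil {
    		panic(err)
    	}
    	// `systemctl daemon-reload` and `systemctl restart containerd` follow in the log.
    }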
	I1205 07:35:02.963803  282781 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:35:02.963902  282781 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:35:02.970220  282781 start.go:564] Will wait 60s for crictl version
	I1205 07:35:02.970310  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:02.974813  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:35:03.021266  282781 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:35:03.021367  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.047120  282781 ssh_runner.go:195] Run: containerd --version
	I1205 07:35:03.073256  282781 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:35:03.076375  282781 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:35:03.098294  282781 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:35:03.105202  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
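[Editor's note] This grep/cp dance idempotently pins host.minikube.internal to 192.168.85.1, the gateway of the profile's Docker network, so workloads inside the node can reach services on the host machine by name.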
	I1205 07:35:03.120382  282781 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:35:03.123255  282781 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:35:03.123408  282781 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:35:03.123487  282781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:35:03.154394  282781 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:35:03.154422  282781 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:35:03.154478  282781 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.154682  282781 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.154778  282781 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.154866  282781 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.154957  282781 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.155040  282781 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.155127  282781 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.155218  282781 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.156724  282781 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.157068  282781 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.157467  282781 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.157620  282781 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:03.157862  282781 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.158016  282781 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.158145  282781 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.158257  282781 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
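[Editor's note] The eight "daemon lookup" failures above are expected on this builder: none of the images exist in the local Docker daemon, so LoadCachedImages falls back to the tarballs under .minikube/cache/images/arm64 and repeats, for the newest-cni-622440 profile, the same existence-check and transfer cycle shown earlier for the no-preload run.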
	I1205 07:35:03.462330  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1205 07:35:03.462445  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.474342  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1205 07:35:03.474456  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.482905  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1205 07:35:03.483018  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1205 07:35:03.493712  282781 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:35:03.493818  282781 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.493879  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.495878  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1205 07:35:03.495977  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.503824  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1205 07:35:03.503953  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.548802  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1205 07:35:03.548918  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.563856  282781 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1205 07:35:03.563966  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.564379  282781 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:35:03.564443  282781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.564494  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564588  282781 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:35:03.564625  282781 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:35:03.564664  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.564745  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.577731  282781 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:35:03.577812  282781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.577873  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.594067  282781 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:35:03.594158  282781 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.594222  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.638413  282781 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:35:03.638520  282781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.638583  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.647984  282781 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:35:03.648065  282781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.648135  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:03.654578  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.654695  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.654792  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.654879  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.654956  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.659132  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:03.660475  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856118  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:03.856393  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:03.856229  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:03.856314  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:35:03.856258  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:03.856389  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:03.856356  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073452  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:35:04.073547  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:35:04.073616  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:35:04.073671  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:35:04.073727  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:35:04.073796  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:04.073863  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:35:04.073966  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:35:04.231226  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231396  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:04.231480  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231559  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:04.231639  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231719  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:04.231791  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:35:04.231868  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:04.231943  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232023  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.232099  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:35:04.232145  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 07:35:04.232313  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:35:04.232384  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:04.287892  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:35:04.287988  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288174  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:35:04.288039  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288247  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:35:04.288060  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288276  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:35:04.288078  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:35:04.288307  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:35:04.288093  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:35:04.288348  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:35:04.288138  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1205 07:35:04.301689  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301786  282781 retry.go:31] will retry after 208.795928ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301815  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301854  282781 retry.go:31] will retry after 334.580121ms: ssh: rejected: connect failed (open failed)
	W1205 07:35:04.301882  282781 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.301902  282781 retry.go:31] will retry after 333.510577ms: ssh: rejected: connect failed (open failed)
	I1205 07:35:04.510761  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.553911  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
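[Editor's note] The three "session error, resetting client" warnings above show ssh_runner recovering from the node's SSH concurrent-session limit: the client is reset and each operation retried after a short randomized delay. A toy version of that retry loop (the delays approximate the ~200-335ms values in the log; the real implementation draws them from a backoff helper in retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // withRetries re-runs op after a short randomized delay, the way
    // retry.go re-dials after "session error, resetting client".
    func withRetries(attempts int, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = withRetries(3, func() error {
    		calls++
    		if calls < 3 { // fail twice, then succeed
    			return errors.New("ssh: rejected: connect failed (open failed)")
    		}
    		return nil
    	})
    }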
	I1205 07:35:02.712615  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.847781055s)
	I1205 07:35:02.712638  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:02.712660  281419 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712732  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:02.712799  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.839195579s)
	I1205 07:35:02.712834  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087126  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.374270081s)
	I1205 07:35:04.087198  281419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.087256  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.374512799s)
	I1205 07:35:04.087266  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:04.087283  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:04.087309  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:05.800879  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.713547867s)
	I1205 07:35:05.800904  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:05.800922  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.800970  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:05.801018  281419 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.713803361s)
	I1205 07:35:05.801061  281419 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:05.801141  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1205 07:35:04.593101  282781 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:35:04.593340  282781 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1205 07:35:04.593425  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.593492  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.620265  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.635918  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.637258  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:35:04.700758  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.710820  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:35:04.947887  282781 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:35:04.947982  282781 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:04.948060  282781 ssh_runner.go:195] Run: which crictl
	I1205 07:35:05.052764  282781 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.052875  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1205 07:35:05.108225  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550590  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:05.550699  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:35:05.550751  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:05.550805  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:35:07.127585  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.576745452s)
	I1205 07:35:07.127612  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:35:07.127630  282781 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127690  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:35:07.127752  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.577096553s)
	I1205 07:35:07.127791  282781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:35:08.530003  282781 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.40218711s)
	I1205 07:35:08.530052  282781 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:35:08.530145  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.530206  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.402499844s)
	I1205 07:35:08.530219  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:35:08.530234  282781 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:08.530258  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:35:07.217489  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.416494396s)
	I1205 07:35:07.217512  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:07.217529  281419 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217586  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:07.217647  281419 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416497334s)
	I1205 07:35:07.217660  281419 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:07.217673  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:08.607664  281419 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.390055936s)
	I1205 07:35:08.607697  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:08.607718  281419 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:08.607767  281419 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:09.100321  281419 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:09.100358  281419 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:09.100365  281419 cache_images.go:94] duration metric: took 11.42435306s to LoadCachedImages
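The lines above are minikube's load-from-cache loop: for each image it stats the tarball under /var/lib/minikube/images, scp's it from the host-side cache when the stat fails, clears any stale tag with crictl rmi, and imports the tarball into containerd's "k8s.io" namespace with ctr. A minimal sketch of the stat-then-import step, assuming local execution (runCmd and loadCachedImage are illustrative names, not minikube's API; the real flow runs these over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCmd runs a command and folds its combined output into the error.
    func runCmd(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
        }
        return nil
    }

    // loadCachedImage mirrors the sequence in the log: confirm the image
    // tarball exists on the node, then import it into the CRI ("k8s.io")
    // image namespace so the kubelet can see it.
    func loadCachedImage(tar string) error {
        if err := exec.Command("stat", tar).Run(); err != nil {
            // In the real flow a failed stat triggers an scp from the
            // host cache before the import is attempted.
            return fmt.Errorf("image tarball %s not on node: %w", tar, err)
        }
        return runCmd("sudo", "ctr", "-n=k8s.io", "images", "import", tar)
    }

    func main() {
        if err := loadCachedImage("/var/lib/minikube/images/pause_3.10.1"); err != nil {
            fmt.Println(err)
        }
    }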
	I1205 07:35:09.100377  281419 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:09.100482  281419 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
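A note on the generated drop-in above: the bare ExecStart= line is deliberate systemd syntax. An empty ExecStart= in a drop-in clears the command inherited from the base kubelet.service, so the fully specified ExecStart that follows is the only one that runs.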
	I1205 07:35:09.100558  281419 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:09.129301  281419 cni.go:84] Creating CNI manager for ""
	I1205 07:35:09.129326  281419 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:09.129345  281419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:35:09.129377  281419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:09.129497  281419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:35:09.129569  281419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.142095  281419 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:09.142170  281419 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:09.156065  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:09.156176  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:09.156262  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:09.156299  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:09.156377  281419 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:09.156425  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:09.179830  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:09.179870  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:09.179956  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:09.179975  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:09.180072  281419 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:09.198397  281419 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:09.198485  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
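The three "Not caching binary" lines above use go-getter-style URLs: the ?checksum=file:<url>.sha256 query tells the downloader to fetch a detached SHA-256 file and verify the binary against it before use. A self-contained sketch of the same verify-while-downloading idea, assuming plain net/http and a detached ".sha256" file (fetchSHA256 and downloadVerified are illustrative names, not minikube's API):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchSHA256 downloads the detached checksum file published next to
    // the binary.
    func fetchSHA256(url string) (string, error) {
        resp, err := http.Get(url + ".sha256")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        fields := strings.Fields(string(b)) // "HASH" or "HASH  filename"
        if len(fields) == 0 {
            return "", fmt.Errorf("empty checksum file at %s.sha256", url)
        }
        return fields[0], nil
    }

    // downloadVerified streams the binary to disk while hashing it, then
    // rejects the file on a checksum mismatch.
    func downloadVerified(url, dest string) error {
        want, err := fetchSHA256(url)
        if err != nil {
            return err
        }
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
        if err := downloadVerified(url, "kubectl"); err != nil {
            fmt.Println(err)
        }
    }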
	I1205 07:35:10.286113  281419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:10.299161  281419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:10.316251  281419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:10.331159  281419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1205 07:35:10.345735  281419 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:10.350335  281419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
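The bash one-liner above is an idempotent hosts-file update: grep -v drops any line already ending in a tab plus control-plane.minikube.internal, the fresh IP mapping is echoed onto the end, and the result is copied back over /etc/hosts through a temp file so the write happens in one sudo cp. Roughly the same logic as a local Go sketch (no sudo or temp-file handling; setHostsEntry is an illustrative name):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry drops any existing mapping for host and appends a
    // fresh "<ip>\t<host>" line, mirroring the grep -v / echo pipeline.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }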
	I1205 07:35:10.363402  281419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:10.512811  281419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:10.529558  281419 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:35:10.529629  281419 certs.go:195] generating shared ca certs ...
	I1205 07:35:10.529657  281419 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.529834  281419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:10.529923  281419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:10.529958  281419 certs.go:257] generating profile certs ...
	I1205 07:35:10.530038  281419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:35:10.530076  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt with IP's: []
	I1205 07:35:10.853605  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt ...
	I1205 07:35:10.853638  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.crt: {Name:mk2a843840c6e4a2de14fc26103351bbaff83f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.854971  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key ...
	I1205 07:35:10.854994  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key: {Name:mk2141bc22495cb299c026ddfd70c2cab1c5df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:10.855117  281419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:35:10.855143  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1205 07:35:11.172976  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 ...
	I1205 07:35:11.173007  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330: {Name:mk727b4727c68f439905180851e5f305719107ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.173862  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 ...
	I1205 07:35:11.173894  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330: {Name:mk05e994b799e7321fe9fd9419571307eec1a124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.174674  281419 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt
	I1205 07:35:11.174770  281419 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key
	I1205 07:35:11.174852  281419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:35:11.174872  281419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt with IP's: []
	I1205 07:35:11.350910  281419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt ...
	I1205 07:35:11.350948  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt: {Name:mk7c9be3a839b00f099d02f39817919630f828cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.352352  281419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key ...
	I1205 07:35:11.352386  281419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key: {Name:mkf516ee46be6e2698cf5a62147058f957abc08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:11.353684  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:11.353744  281419 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:11.353758  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:11.353787  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:11.353817  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:11.353849  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:11.353898  281419 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:11.354490  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:11.381382  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:11.406241  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:11.428183  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:11.450978  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:11.476407  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:11.498851  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:11.519352  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:11.539765  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:11.559484  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:11.579911  281419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:11.600685  281419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:11.616084  281419 ssh_runner.go:195] Run: openssl version
	I1205 07:35:11.625728  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.635065  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:11.645233  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651040  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.651153  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:11.693810  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.702555  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:11.710996  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.719477  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:11.727857  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732743  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.732862  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:11.774767  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:11.783345  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:11.791961  281419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.801063  281419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:11.809888  281419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.814918  281419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.815034  281419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:11.857224  281419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:11.866093  281419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
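The openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: during verification a CA in /etc/ssl/certs is looked up as <subject-hash>.0, so each installed PEM needs a matching symlink (b5213941.0 for minikubeCA in this run). A sketch of those two steps, assuming local execution (installCA is an illustrative name; the real flow runs them over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA computes the OpenSSL subject hash of a CA certificate and
    // links it into /etc/ssl/certs under "<hash>.0".
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }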
	I1205 07:35:11.874706  281419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:11.879598  281419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:11.879697  281419 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:11.879803  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:11.879898  281419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:11.908036  281419 cri.go:89] found id: ""
	I1205 07:35:11.908156  281419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:11.919349  281419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:11.928155  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:11.928267  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:11.939709  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:11.939779  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:11.939856  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:11.949257  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:11.949365  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:11.957760  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:11.967055  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:11.967163  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:11.975295  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.984686  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:11.984797  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:11.994202  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:12.005520  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:12.005606  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:35:12.026031  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:12.083192  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:35:12.083309  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:35:12.193051  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:35:12.193150  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:35:12.193215  281419 kubeadm.go:319] OS: Linux
	I1205 07:35:12.193261  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:35:12.193313  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:35:12.193374  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:35:12.193426  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:35:12.193479  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:35:12.193529  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:35:12.193578  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:35:12.193684  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:35:12.193786  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:35:12.268365  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:35:12.268486  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:35:12.268582  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:35:12.276338  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:35:10.757563  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.227284144s)
	I1205 07:35:10.757586  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:35:10.757606  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757654  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:35:10.757716  282781 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.227556574s)
	I1205 07:35:10.757730  282781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:35:10.757745  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:35:12.017290  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.259613359s)
	I1205 07:35:12.017315  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:35:12.017333  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:12.017393  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:35:13.470638  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.453225657s)
	I1205 07:35:13.470663  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:35:13.470680  282781 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:13.470727  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:35:12.281185  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:35:12.281356  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:35:12.281459  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:35:12.381667  281419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:35:12.863385  281419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:35:13.114787  281419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:35:13.312565  281419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:35:13.794303  281419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:35:13.794935  281419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.299804  281419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:35:14.300371  281419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1205 07:35:14.449360  281419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:35:14.671722  281419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:35:15.172052  281419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:35:15.174002  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:35:15.463292  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:35:16.096919  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:35:16.336520  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:35:16.828502  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:35:17.109506  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:35:17.109613  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:35:17.109687  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:35:15.103687  282781 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.632919174s)
	I1205 07:35:15.103711  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:35:15.103732  282781 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.103783  282781 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:35:15.621241  282781 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:35:15.621272  282781 cache_images.go:125] Successfully loaded all cached images
	I1205 07:35:15.621278  282781 cache_images.go:94] duration metric: took 12.466843247s to LoadCachedImages
	I1205 07:35:15.621292  282781 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:35:15.621381  282781 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:35:15.621444  282781 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:35:15.654017  282781 cni.go:84] Creating CNI manager for ""
	I1205 07:35:15.654037  282781 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:35:15.654053  282781 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:35:15.654081  282781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:35:15.654199  282781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:35:15.654267  282781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.664199  282781 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:35:15.664254  282781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:35:15.672856  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:35:15.672884  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:35:15.672938  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:35:15.672957  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:35:15.672855  282781 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:35:15.672995  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:35:15.699685  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:35:15.699722  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:35:15.699741  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:35:15.699766  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:35:15.715022  282781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:35:15.749908  282781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:35:15.749948  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1205 07:35:16.655429  282781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:35:16.670290  282781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:35:16.693587  282781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:35:16.711778  282781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:35:16.725821  282781 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:35:16.730355  282781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:35:16.740137  282781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:35:16.867916  282781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:35:16.883411  282781 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:35:16.883478  282781 certs.go:195] generating shared ca certs ...
	I1205 07:35:16.883521  282781 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:16.883711  282781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:35:16.883800  282781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:35:16.883837  282781 certs.go:257] generating profile certs ...
	I1205 07:35:16.883935  282781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:35:16.883965  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt with IP's: []
	I1205 07:35:17.059440  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt ...
	I1205 07:35:17.059534  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.crt: {Name:mk4216fda7b2560e6bf3adab97ae3109b56cd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.059812  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key ...
	I1205 07:35:17.059867  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key: {Name:mk6502f52b6a29fc92d89b24a9497a31259c0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.061509  282781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:35:17.061580  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:35:17.406723  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 ...
	I1205 07:35:17.406756  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8: {Name:mk48869d32b8a5be7389357c612f9688b7f98edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407538  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 ...
	I1205 07:35:17.407563  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8: {Name:mk39f9d896537098c3c994d4ce7924ee6a49dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.407660  282781 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt
	I1205 07:35:17.407739  282781 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key
	I1205 07:35:17.407802  282781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:35:17.407822  282781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt with IP's: []
	I1205 07:35:17.656775  282781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt ...
	I1205 07:35:17.656814  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt: {Name:mkf75c55fc25a5343874cbc403686708a7f26c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657007  282781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key ...
	I1205 07:35:17.657024  282781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key: {Name:mk9047fe05ee73b34ef5e42f150f28bde6ac00b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:35:17.657241  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:35:17.657291  282781 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:35:17.657303  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:35:17.657332  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:35:17.657363  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:35:17.657390  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:35:17.657440  282781 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:35:17.658030  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:35:17.677123  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:35:17.695559  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:35:17.713701  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:35:17.731347  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:35:17.749295  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:35:17.766915  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:35:17.783871  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:35:17.801244  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:35:17.819265  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:35:17.836390  282781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:35:17.860517  282781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:35:17.875166  282781 ssh_runner.go:195] Run: openssl version
	I1205 07:35:17.882955  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.891095  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:35:17.899082  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903708  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.903782  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:35:17.945497  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.952956  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:35:17.960147  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.967438  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:35:17.974447  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.977974  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:17.978088  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:35:18.019263  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:35:18.027845  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:35:18.036126  282781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.044084  282781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:35:18.052338  282781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056629  282781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.056703  282781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:35:18.099363  282781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:35:18.107989  282781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 07:35:18.116260  282781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:35:18.120762  282781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:35:18.120819  282781 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:35:18.120900  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:35:18.120961  282781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:35:18.149219  282781 cri.go:89] found id: ""
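Note: the empty result above is expected on a fresh node; minikube lists kube-system containers through the CRI before deciding whether stale state needs cleanup. The same check by hand, a minimal sketch:

	# --quiet prints only container IDs; an empty string means no kube-system
	# containers (running or exited) exist yet
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	[ -z "$ids" ] && echo "fresh node: no kube-system containers"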
	I1205 07:35:18.149296  282781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:35:18.159871  282781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:35:18.168276  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:35:18.168340  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:35:18.176150  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:35:18.176181  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:35:18.176234  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:35:18.184056  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:35:18.184125  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:35:18.191302  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:35:18.198850  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:35:18.198918  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:35:18.206439  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.213847  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:35:18.213913  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:35:18.220993  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:35:18.228433  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:35:18.228548  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
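Note: the grep/rm loop above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and removed otherwise (here the files simply do not exist yet, so every grep exits 2 and the rm is a no-op). One iteration by hand, a sketch:

	f=/etc/kubernetes/admin.conf
	# keep the file only when it targets the expected control-plane endpoint
	sudo grep -q https://control-plane.minikube.internal:8443 "$f" || sudo rm -f "$f"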
	I1205 07:35:18.235813  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:35:18.359095  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:35:18.359647  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:35:18.423544  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
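Note: the cgroups v1 warning above is the one relevant to the failure that follows: kubelet v1.35 refuses to run on a cgroup v1 host unless explicitly opted in. Assuming the KubeletConfiguration field named in the warning text, the opt-in would look like this (a sketch only, not something this test run performs):

	# append the opt-in to the kubelet config the node already uses;
	# field name inferred from the warning ('FailCgroupV1' -> failCgroupV1)
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet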
	I1205 07:35:17.113932  281419 out.go:252]   - Booting up control plane ...
	I1205 07:35:17.114055  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:35:17.130916  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:35:17.131000  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:35:17.144923  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:35:17.145031  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:35:17.153033  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:35:17.153136  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:35:17.153238  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:35:17.320155  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:35:17.320276  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:17.318333  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000477824s
	I1205 07:39:17.318360  281419 kubeadm.go:319] 
	I1205 07:39:17.318428  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:17.318462  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:17.318567  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:17.318571  281419 kubeadm.go:319] 
	I1205 07:39:17.318675  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:17.318708  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:17.318739  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:39:17.318744  281419 kubeadm.go:319] 
	I1205 07:39:17.323674  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:39:17.324139  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:39:17.324260  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:39:17.324546  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:39:17.324556  281419 kubeadm.go:319] 
	I1205 07:39:17.324629  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
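Note: the healthz URL quoted in the error is the kubelet's own liveness endpoint; "connection refused" means the kubelet process is not listening at all, not merely unhealthy. Triage on the node follows the commands kubeadm suggests, plus the exact probe its wait loop performs:

	# is the unit up, and why did it last exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# the health call kubeadm retries for up to 4m0s
	curl -sSL http://127.0.0.1:10248/healthz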
	W1205 07:39:17.324734  281419 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-241270] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000477824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:39:17.324832  281419 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:17.734892  281419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
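Note: after a failed init, minikube resets the node and confirms the kubelet is down before retrying; systemctl's is-active exits 0 only when the unit is active, so --quiet turns it into a pure status check. A sketch:

	# the exit status is the answer; --quiet suppresses the 'active/inactive' text
	if sudo systemctl is-active --quiet kubelet; then
	    echo "kubelet still active after reset"
	fi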
	I1205 07:39:17.749336  281419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:17.749399  281419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:17.757730  281419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:17.757790  281419 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:17.757850  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:17.766487  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:17.766564  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:17.774523  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:17.782748  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:17.782816  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:17.790744  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.798734  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:17.798821  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:17.806627  281419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:17.814519  281419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:17.814588  281419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:17.822487  281419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:17.863307  281419 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:17.863481  281419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:17.933763  281419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:17.933840  281419 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:17.933891  281419 kubeadm.go:319] OS: Linux
	I1205 07:39:17.933940  281419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:17.933992  281419 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:17.934041  281419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:17.934092  281419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:17.934143  281419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:17.934200  281419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:17.934250  281419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:17.934300  281419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:17.934350  281419 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:18.005121  281419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:18.005386  281419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:18.005505  281419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:18.013422  281419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:18.015372  281419 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:18.015478  281419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:18.015552  281419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:18.015718  281419 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:18.016366  281419 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:18.016626  281419 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:18.017069  281419 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:18.017546  281419 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:18.017846  281419 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:18.018157  281419 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:18.018500  281419 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:18.018795  281419 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:18.018893  281419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:18.103696  281419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:18.482070  281419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:18.757043  281419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:18.907937  281419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:19.448057  281419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:19.448772  281419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:19.451764  281419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:19.453331  281419 out.go:252]   - Booting up control plane ...
	I1205 07:39:19.453502  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:19.453624  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:19.454383  281419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:19.477703  281419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:19.478043  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:19.486387  281419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:19.486517  281419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:19.486561  281419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:19.636438  281419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:19.636619  281419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.111676  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:39:22.111715  282781 kubeadm.go:319] 
	I1205 07:39:22.111850  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:39:22.120229  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.120296  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.120393  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.120460  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.120499  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.120549  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.120597  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.120654  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.120706  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.120774  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.120826  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.120871  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.120918  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.120970  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.121046  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.121144  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.121260  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.121329  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.122793  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.122965  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.123105  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.123184  282781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:39:22.123243  282781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:39:22.123304  282781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:39:22.123355  282781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:39:22.123409  282781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:39:22.123531  282781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123598  282781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:39:22.123723  282781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:39:22.123789  282781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:39:22.123857  282781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:39:22.123902  282781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:39:22.123959  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:22.124010  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:22.124072  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:22.124127  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:22.124191  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:22.124251  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:22.124334  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:22.124401  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:22.125727  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:22.125831  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:22.125912  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:22.125982  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:22.126088  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:22.126182  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:22.126289  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:22.126374  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:22.126419  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:22.126558  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:22.126665  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:39:22.126733  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000670148s
	I1205 07:39:22.126738  282781 kubeadm.go:319] 
	I1205 07:39:22.126805  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:39:22.126840  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:39:22.126951  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:39:22.126955  282781 kubeadm.go:319] 
	I1205 07:39:22.127067  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:39:22.127100  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:39:22.127131  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:39:22.127242  282781 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-622440] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000670148s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:39:22.127337  282781 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1205 07:39:22.127648  282781 kubeadm.go:319] 
	I1205 07:39:22.555931  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:39:22.571474  282781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:39:22.571542  282781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:39:22.579138  282781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:39:22.579159  282781 kubeadm.go:158] found existing configuration files:
	
	I1205 07:39:22.579236  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:39:22.586998  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:39:22.587095  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:39:22.597974  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:39:22.612071  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:39:22.612169  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:39:22.620438  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.629905  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:39:22.629992  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:39:22.637890  282781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:39:22.646753  282781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:39:22.646849  282781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:39:22.655118  282781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:39:22.694938  282781 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:39:22.695040  282781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:39:22.766969  282781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:39:22.767067  282781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:39:22.767130  282781 kubeadm.go:319] OS: Linux
	I1205 07:39:22.767228  282781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:39:22.767293  282781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:39:22.767344  282781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:39:22.767408  282781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:39:22.767460  282781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:39:22.767518  282781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:39:22.767564  282781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:39:22.767626  282781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:39:22.767685  282781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:39:22.833955  282781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:39:22.834079  282781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:39:22.834176  282781 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:39:22.845649  282781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:39:22.848548  282781 out.go:252]   - Generating certificates and keys ...
	I1205 07:39:22.848634  282781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:39:22.848703  282781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:39:22.848782  282781 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:39:22.848843  282781 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:39:22.848912  282781 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:39:22.848966  282781 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:39:22.849031  282781 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:39:22.849092  282781 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:39:22.849211  282781 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:39:22.849285  282781 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:39:22.849326  282781 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:39:22.849379  282781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:39:23.141457  282781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:39:23.628614  282781 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:39:24.042217  282781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:39:24.241513  282781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:39:24.738880  282781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:39:24.739414  282781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:39:24.742365  282781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:39:24.744249  282781 out.go:252]   - Booting up control plane ...
	I1205 07:39:24.744385  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:39:24.744476  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:39:24.746446  282781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:39:24.766106  282781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:39:24.766217  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:39:24.773547  282781 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:39:24.773863  282781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:39:24.773913  282781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:39:24.911724  282781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:39:24.911843  282781 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:43:19.629743  281419 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000979602s
	I1205 07:43:19.629776  281419 kubeadm.go:319] 
	I1205 07:43:19.629841  281419 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:19.629881  281419 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:19.629992  281419 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:19.630000  281419 kubeadm.go:319] 
	I1205 07:43:19.630105  281419 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:19.630141  281419 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:19.630176  281419 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:19.630185  281419 kubeadm.go:319] 
	I1205 07:43:19.633703  281419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:19.634129  281419 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:19.634243  281419 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:19.634512  281419 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:19.634521  281419 kubeadm.go:319] 
	I1205 07:43:19.634601  281419 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:19.634654  281419 kubeadm.go:403] duration metric: took 8m7.754963643s to StartCluster
	I1205 07:43:19.634689  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:19.634770  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:19.664154  281419 cri.go:89] found id: ""
	I1205 07:43:19.664178  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.664186  281419 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:19.664194  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:19.664259  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:19.688943  281419 cri.go:89] found id: ""
	I1205 07:43:19.689027  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.689051  281419 logs.go:284] No container was found matching "etcd"
	I1205 07:43:19.689071  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:19.689145  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:19.714243  281419 cri.go:89] found id: ""
	I1205 07:43:19.714266  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.714278  281419 logs.go:284] No container was found matching "coredns"
	I1205 07:43:19.714285  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:19.714344  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:19.739300  281419 cri.go:89] found id: ""
	I1205 07:43:19.739326  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.739334  281419 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:19.739341  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:19.739409  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:19.764133  281419 cri.go:89] found id: ""
	I1205 07:43:19.764158  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.764168  281419 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:19.764174  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:19.764233  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:19.791591  281419 cri.go:89] found id: ""
	I1205 07:43:19.791655  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.791670  281419 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:19.791678  281419 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:19.791736  281419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:19.817073  281419 cri.go:89] found id: ""
	I1205 07:43:19.817096  281419 logs.go:282] 0 containers: []
	W1205 07:43:19.817104  281419 logs.go:284] No container was found matching "kindnet"
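Note: the per-component sweep above probes each expected control-plane container by name; every empty result confirms kubeadm never got far enough to start any of them. The same sweep as a loop, a sketch:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # --quiet prints only IDs; -a includes exited containers
	  printf '%s: ' "$c"; sudo crictl ps -a --quiet --name="$c" | wc -l
	done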
	I1205 07:43:19.817113  281419 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:19.817124  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:19.884361  281419 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:19.886664  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:43:19.933532  281419 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:19.933565  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:20.000746  281419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:19.992942    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.993761    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995468    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.995759    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:19.997362    5526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [stderr identical to the five connection-refused errors above]
	I1205 07:43:20.000782  281419 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:20.000794  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:20.048127  281419 logs.go:123] Gathering logs for container status ...
	I1205 07:43:20.048164  281419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:43:20.079198  281419 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000979602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:20.079257  281419 out.go:285] * 
	W1205 07:43:20.079339  281419 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1205 07:43:20.079395  281419 out.go:285] * 
	W1205 07:43:20.081583  281419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:20.084896  281419 out.go:203] 
	W1205 07:43:20.086596  281419 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1205 07:43:20.086704  281419 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:20.086780  281419 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:20.088336  281419 out.go:203] 
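	[Editor's note] The "Suggestion" line above advises passing --extra-config=kubelet.cgroup-driver=systemd. As an illustrative sketch only (the profile name is a placeholder, and the remaining flags merely mirror this test matrix), the retried start would look something like:
	
	    out/minikube-linux-arm64 start -p <profile> \
	      --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	Whether this suggestion actually resolves the failure is unverified here; the kubelet journal further below points at cgroup v1 validation rather than the cgroup driver.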
	I1205 07:43:24.912154  282781 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000692536s
	I1205 07:43:24.912179  282781 kubeadm.go:319] 
	I1205 07:43:24.912237  282781 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:43:24.912269  282781 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:43:24.912374  282781 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:43:24.912378  282781 kubeadm.go:319] 
	I1205 07:43:24.912483  282781 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:43:24.912515  282781 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:43:24.912545  282781 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:43:24.912549  282781 kubeadm.go:319] 
	I1205 07:43:24.918373  282781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:43:24.918871  282781 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:43:24.919001  282781 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:43:24.919288  282781 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:43:24.919298  282781 kubeadm.go:319] 
	I1205 07:43:24.919374  282781 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:43:24.919431  282781 kubeadm.go:403] duration metric: took 8m6.798617744s to StartCluster
	I1205 07:43:24.919465  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:43:24.919523  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:43:24.960538  282781 cri.go:89] found id: ""
	I1205 07:43:24.960597  282781 logs.go:282] 0 containers: []
	W1205 07:43:24.960612  282781 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:43:24.960628  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:43:24.960720  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:43:25.008615  282781 cri.go:89] found id: ""
	I1205 07:43:25.008645  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.008654  282781 logs.go:284] No container was found matching "etcd"
	I1205 07:43:25.008660  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:43:25.008731  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:43:25.051444  282781 cri.go:89] found id: ""
	I1205 07:43:25.051465  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.051473  282781 logs.go:284] No container was found matching "coredns"
	I1205 07:43:25.051479  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:43:25.051537  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:43:25.082467  282781 cri.go:89] found id: ""
	I1205 07:43:25.082489  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.082555  282781 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:43:25.082563  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:43:25.082640  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:43:25.147881  282781 cri.go:89] found id: ""
	I1205 07:43:25.147902  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.147911  282781 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:43:25.147917  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:43:25.147976  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:43:25.224329  282781 cri.go:89] found id: ""
	I1205 07:43:25.224361  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.224370  282781 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:43:25.224378  282781 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:43:25.224434  282781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:43:25.250842  282781 cri.go:89] found id: ""
	I1205 07:43:25.250870  282781 logs.go:282] 0 containers: []
	W1205 07:43:25.250879  282781 logs.go:284] No container was found matching "kindnet"
	I1205 07:43:25.250889  282781 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:43:25.250901  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:43:25.319837  282781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:43:25.312291    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.313007    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314611    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.314898    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:43:25.316383    5506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [stderr identical to the five connection-refused errors above]
	I1205 07:43:25.319857  282781 logs.go:123] Gathering logs for containerd ...
	I1205 07:43:25.319870  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:43:25.371742  282781 logs.go:123] Gathering logs for container status ...
	I1205 07:43:25.371978  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:43:25.409796  282781 logs.go:123] Gathering logs for kubelet ...
	I1205 07:43:25.409818  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:43:25.474308  282781 logs.go:123] Gathering logs for dmesg ...
	I1205 07:43:25.474345  282781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:43:25.487408  282781 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000692536s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:43:25.487510  282781 out.go:285] * 
	W1205 07:43:25.487601  282781 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1205 07:43:25.487658  282781 out.go:285] * 
	W1205 07:43:25.490185  282781 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:43:25.493272  282781 out.go:203] 
	W1205 07:43:25.494648  282781 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1205 07:43:25.494700  282781 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:43:25.494737  282781 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:43:25.496566  282781 out.go:203] 
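	[Editor's note] Both failures trace back to the SystemVerification warning quoted above: kubelet v1.35 refuses to run on a cgroup v1 host unless the FailCgroupV1 configuration option is explicitly set to false. A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the upstream field spelling (failCgroupV1, to be verified against the v1.35 release) and leaving aside how minikube would inject it into the node:
	
	    # KubeletConfiguration fragment (sketch; option name taken from the warning above)
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	
	Per the warning's link (KEP sig-node/5573), the accompanying preflight validation would also need to be skipped explicitly.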
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:00 no-preload-241270 containerd[758]: time="2025-12-05T07:35:00.872186619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.701941885Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.704289218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722125402Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:02 no-preload-241270 containerd[758]: time="2025-12-05T07:35:02.722911774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.075081950Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.078766218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.099917836Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:04 no-preload-241270 containerd[758]: time="2025-12-05T07:35:04.100531825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.790505473Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.792674113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.806940960Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:05 no-preload-241270 containerd[758]: time="2025-12-05T07:35:05.807327368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.207463637Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.209905191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.218221241Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:07 no-preload-241270 containerd[758]: time="2025-12-05T07:35:07.219001377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.595991834Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.598386708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.607030393Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 no-preload-241270 containerd[758]: time="2025-12-05T07:35:08.608072538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.091545558Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.093932416Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108389516Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:09 no-preload-241270 containerd[758]: time="2025-12-05T07:35:09.108843487Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:45:02.528035    6888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:02.528931    6888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:02.530799    6888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:02.531545    6888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:02.533134    6888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:45:02 up  2:27,  0 user,  load average: 0.89, 1.08, 1.65
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:44:58 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:44:59 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 05 07:44:59 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:44:59 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:44:59 no-preload-241270 kubelet[6767]: E1205 07:44:59.650642    6767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:44:59 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:44:59 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:00 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 05 07:45:00 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:00 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:00 no-preload-241270 kubelet[6772]: E1205 07:45:00.489949    6772 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:00 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:00 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:01 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 455.
	Dec 05 07:45:01 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:01 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:01 no-preload-241270 kubelet[6784]: E1205 07:45:01.409355    6784 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:01 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:01 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:02 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 05 07:45:02 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:02 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:02 no-preload-241270 kubelet[6805]: E1205 07:45:02.193790    6805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:02 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:02 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
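	[Editor's note] The kubelet journal above shows kubelet crash-looping (restart counter 453 through 456) because it "is configured to not run on a host using cgroup v1". A quick, standard way to confirm which cgroup hierarchy the host mounts:
	
	    stat -fc %T /sys/fs/cgroup/
	    # prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on a cgroup v1 host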
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 6 (420.708093ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:45:03.193822  297232 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (98.19s)
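	[Editor's note] The status output above warns that kubectl points at a stale context and that "no-preload-241270" does not appear in the kubeconfig. Per minikube's own hint in that output, the context would be repointed with an invocation like (illustrative; the profile name is taken from this run):
	
	    out/minikube-linux-arm64 update-context -p no-preload-241270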

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (116.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 07:43:39.009515    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:44:14.019461    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m54.64743073s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
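Note: all four validation errors above are the same underlying failure: kubectl apply cannot reach the apiserver at https://localhost:8443 (connection refused), most likely the same cgroup v1 kubelet failure seen above for no-preload-241270, since both profiles run v1.35.0-beta.0 on the same host. The `--validate=false` workaround suggested in the error text would not help, because the apply itself still needs a reachable apiserver. A quick reachability probe from inside the node, as a sketch (assumes the container is still running and curl is available in the kicbase image):

	out/minikube-linux-arm64 -p newest-cni-622440 ssh -- curl -sk https://localhost:8443/readyz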
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-622440
helpers_test.go:243: (dbg) docker inspect newest-cni-622440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	        "Created": "2025-12-05T07:34:55.965403434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:34:56.049476512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hosts",
	        "LogPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4-json.log",
	        "Name": "/newest-cni-622440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-622440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-622440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	                "LowerDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-622440",
	                "Source": "/var/lib/docker/volumes/newest-cni-622440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-622440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-622440",
	                "name.minikube.sigs.k8s.io": "newest-cni-622440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3f2de86e0a6b922a19395ef639278ce284c7b00e34a68ffb9832a027d78cfb2",
	            "SandboxKey": "/var/run/docker/netns/c3f2de86e0a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-622440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:ac:81:a7:80:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96c6294e00fc4b96dda84202da479b822dd69419748060a344f1800d21559cfe",
	                    "EndpointID": "a946bb977c5c9cfd0a36319812e5cea73d907a080ae56fc86cef3fb8982f4b72",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-622440",
	                        "9420074472d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
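Note: the inspect output above shows every control-plane port published on loopback with an ephemeral host port (8443/tcp -> 127.0.0.1:33096). The same Go-template pattern the harness uses below for 22/tcp can extract any single mapping from this JSON; for example, for the apiserver port (port key taken from the dump above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-622440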
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 6 (362.435686ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:45:22.470711  299140 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-083143 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p embed-certs-861489 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ stop    │ -p no-preload-241270 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:04.741898  297527 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:04.742044  297527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:04.742057  297527 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:04.742062  297527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:04.742346  297527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:04.742756  297527 out.go:368] Setting JSON to false
	I1205 07:45:04.743604  297527 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8852,"bootTime":1764911853,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:04.743672  297527 start.go:143] virtualization:  
	I1205 07:45:04.745353  297527 out.go:179] * [no-preload-241270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:04.746789  297527 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:04.746911  297527 notify.go:221] Checking for updates...
	I1205 07:45:04.749277  297527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:04.750300  297527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:04.751350  297527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:04.752611  297527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:04.753666  297527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:04.755353  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:04.755968  297527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:04.781283  297527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:04.781395  297527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:04.846713  297527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:04.837015499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:04.846825  297527 docker.go:319] overlay module found
	I1205 07:45:04.848256  297527 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:04.849414  297527 start.go:309] selected driver: docker
	I1205 07:45:04.849427  297527 start.go:927] validating driver "docker" against &{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:04.849510  297527 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:04.850207  297527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:04.905282  297527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:04.89574809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:04.905644  297527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:45:04.905674  297527 cni.go:84] Creating CNI manager for ""
	I1205 07:45:04.905729  297527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:04.905772  297527 start.go:353] cluster config:
	{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:04.907336  297527 out.go:179] * Starting "no-preload-241270" primary control-plane node in "no-preload-241270" cluster
	I1205 07:45:04.908619  297527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:04.909942  297527 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:04.911278  297527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:04.911360  297527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:04.911408  297527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:45:04.911696  297527 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911781  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:04.911795  297527 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.205µs
	I1205 07:45:04.911812  297527 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:04.911829  297527 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911875  297527 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911911  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:04.911918  297527 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 90.077µs
	I1205 07:45:04.911924  297527 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:04.911935  297527 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911950  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:04.911959  297527 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 94.606µs
	I1205 07:45:04.911964  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:04.911967  297527 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:04.911970  297527 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 36.456µs
	I1205 07:45:04.911975  297527 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:04.911979  297527 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911988  297527 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912011  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:04.912017  297527 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.574µs
	I1205 07:45:04.912021  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:04.912023  297527 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:04.912027  297527 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 43.389µs
	I1205 07:45:04.912032  297527 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:04.912034  297527 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912052  297527 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912065  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:04.912072  297527 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.246µs
	I1205 07:45:04.912078  297527 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:04.912081  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:04.912086  297527 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.266µs
	I1205 07:45:04.912092  297527 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:04.912111  297527 cache.go:87] Successfully saved all images to host disk.
	I1205 07:45:04.931490  297527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:04.931513  297527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:45:04.931532  297527 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:04.931564  297527 start.go:360] acquireMachinesLock for no-preload-241270: {Name:mk38da592769bcf9f80cfe38cf457b769a394afe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.931618  297527 start.go:364] duration metric: took 35.66µs to acquireMachinesLock for "no-preload-241270"
	I1205 07:45:04.931642  297527 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:04.931648  297527 fix.go:54] fixHost starting: 
	I1205 07:45:04.931902  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:04.949375  297527 fix.go:112] recreateIfNeeded on no-preload-241270: state=Stopped err=<nil>
	W1205 07:45:04.949405  297527 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:45:04.950949  297527 out.go:252] * Restarting existing docker container for "no-preload-241270" ...
	I1205 07:45:04.951034  297527 cli_runner.go:164] Run: docker start no-preload-241270
	I1205 07:45:05.216852  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:05.235890  297527 kic.go:430] container "no-preload-241270" state is running.
	I1205 07:45:05.236264  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:05.258841  297527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:45:05.259070  297527 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:05.259126  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:05.279161  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:05.279482  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:05.279490  297527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:05.280130  297527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50756->127.0.0.1:33098: read: connection reset by peer
	I1205 07:45:08.432794  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:45:08.432818  297527 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:45:08.432884  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.451184  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:08.451509  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:08.451525  297527 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:45:08.610187  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:45:08.610264  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.628575  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:08.628883  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:08.628900  297527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:08.777451  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:08.777540  297527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:08.777572  297527 ubuntu.go:190] setting up certificates
	I1205 07:45:08.777619  297527 provision.go:84] configureAuth start
	I1205 07:45:08.777736  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:08.794924  297527 provision.go:143] copyHostCerts
	I1205 07:45:08.794996  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:08.795005  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:08.795083  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:08.795192  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:08.795197  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:08.795222  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:08.795281  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:08.795286  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:08.795309  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:08.795359  297527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
	I1205 07:45:08.877001  297527 provision.go:177] copyRemoteCerts
	I1205 07:45:08.877073  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:08.877113  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.894726  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.000993  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:09.022057  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:09.042245  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:45:09.061318  297527 provision.go:87] duration metric: took 283.659327ms to configureAuth
	I1205 07:45:09.061344  297527 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:09.061595  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:09.061608  297527 machine.go:97] duration metric: took 3.802530887s to provisionDockerMachine
	I1205 07:45:09.061617  297527 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:45:09.061646  297527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:09.061708  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:09.061761  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.079966  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.185389  297527 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:09.188869  297527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:09.188894  297527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:09.188906  297527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:09.188962  297527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:09.189042  297527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:09.189146  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:09.196675  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:09.214750  297527 start.go:296] duration metric: took 153.100248ms for postStartSetup
	I1205 07:45:09.214829  297527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:09.214868  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.234509  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.338972  297527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:09.343677  297527 fix.go:56] duration metric: took 4.41202113s for fixHost
	I1205 07:45:09.343702  297527 start.go:83] releasing machines lock for "no-preload-241270", held for 4.412070689s
	I1205 07:45:09.343823  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:09.361505  297527 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:09.361559  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.361646  297527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:09.361704  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.379923  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.391234  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.569715  297527 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:09.576254  297527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:09.580750  297527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:09.580847  297527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:09.588564  297527 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
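The find/mv pipeline above parks any stock bridge or podman CNI config so it cannot conflict with the CNI minikube installs; on this node none existed, so nothing was renamed. Illustrative effect had one been present:

	sudo mv /etc/cni/net.d/10-bridge.conflist /etc/cni/net.d/10-bridge.conflist.mk_disabled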
	I1205 07:45:09.588636  297527 start.go:496] detecting cgroup driver to use...
	I1205 07:45:09.588673  297527 detect.go:187] detected "cgroupfs" cgroup driver on host os
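A quick host-side way to see why "cgroupfs" gets detected (a sketch, not detect.go's exact logic): check the cgroup filesystem type and what PID 1 is.

	stat -fc %T /sys/fs/cgroup    # "cgroup2fs" => cgroup v2; "tmpfs" => legacy v1 hierarchy
	ps -p 1 -o comm=              # "systemd" would make the systemd cgroup driver an option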
	I1205 07:45:09.588723  297527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:09.606328  297527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:09.622202  297527 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:09.622277  297527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:09.638550  297527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:09.653051  297527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:09.772617  297527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:09.889348  297527 docker.go:234] disabling docker service ...
	I1205 07:45:09.889427  297527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:09.904346  297527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:09.917622  297527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:10.040200  297527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:10.152544  297527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:10.165439  297527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:10.179564  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:10.189347  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:10.198827  297527 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:10.198955  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:10.207791  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:10.216508  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:10.225118  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:10.234468  297527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:10.242776  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:10.251554  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:10.260148  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:10.269308  297527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:10.277914  297527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:10.285361  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:10.410905  297527 ssh_runner.go:195] Run: sudo systemctl restart containerd
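Taken together, the sed edits above leave a config.toml fragment along these lines before the restart (illustrative; exact plugin table names vary across containerd releases):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false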
	I1205 07:45:10.500880  297527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:10.501025  297527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:10.505005  297527 start.go:564] Will wait 60s for crictl version
	I1205 07:45:10.505096  297527 ssh_runner.go:195] Run: which crictl
	I1205 07:45:10.508636  297527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:10.534679  297527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:10.534786  297527 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:10.555373  297527 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:10.576157  297527 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:10.577470  297527 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:10.597528  297527 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:10.601385  297527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
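The bash one-liner rewrites /etc/hosts through a temp file: any stale host.minikube.internal entry is filtered out and the current mapping appended, leaving exactly one line of the form:

	192.168.76.1	host.minikube.internal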
	I1205 07:45:10.611008  297527 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:10.611124  297527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:10.611179  297527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:10.634698  297527 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:10.634718  297527 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:10.634726  297527 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:10.634828  297527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:10.634890  297527 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:10.659475  297527 cni.go:84] Creating CNI manager for ""
	I1205 07:45:10.659538  297527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:10.659576  297527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:45:10.659612  297527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:10.659747  297527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:45:10.659841  297527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:10.667533  297527 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:10.667605  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:10.675030  297527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:10.687622  297527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:10.701676  297527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
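With kubeadm.yaml.new staged on the node, the rendered config can be sanity-checked by hand; recent kubeadm releases ship a validator (shown here as a sketch, not a step minikube itself runs):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new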
	I1205 07:45:10.719568  297527 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:10.723580  297527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:10.734183  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:10.856033  297527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:10.872825  297527 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:45:10.872854  297527 certs.go:195] generating shared ca certs ...
	I1205 07:45:10.872897  297527 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:10.873107  297527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:10.873257  297527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:10.873273  297527 certs.go:257] generating profile certs ...
	I1205 07:45:10.873421  297527 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:45:10.873539  297527 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:45:10.873622  297527 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:45:10.873780  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:10.873830  297527 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:10.873858  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:10.873896  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:10.873945  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:10.873974  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:10.874054  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:10.874806  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:10.899986  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:10.916573  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:10.934642  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:10.953079  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:10.969788  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:10.986475  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:11.004065  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:11.024377  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:11.042754  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:11.062063  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:11.080346  297527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:11.093286  297527 ssh_runner.go:195] Run: openssl version
	I1205 07:45:11.101677  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.110165  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:11.118933  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.123646  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.123764  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.167706  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:11.175358  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.183062  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:11.190505  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.194367  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.194436  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.235889  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:11.243256  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.250726  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:11.257911  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.261666  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.261727  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.303155  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
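The ln/test pairs above exist because OpenSSL locates trusted CAs by subject-hash file name: "openssl x509 -hash" prints the hash (b5213941 for minikubeCA here), and /etc/ssl/certs/<hash>.0 must point at the PEM for verification to find it:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0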
	I1205 07:45:11.311098  297527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:11.315323  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:11.356438  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:11.397372  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:11.438383  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:11.479494  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:11.522908  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
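The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); these six probes are how the restart decides whether the control-plane certs need regeneration:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"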
	I1205 07:45:11.569080  297527 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:11.569205  297527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:11.569298  297527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:11.606344  297527 cri.go:89] found id: ""
	I1205 07:45:11.606450  297527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:11.615404  297527 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:11.615424  297527 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:11.615508  297527 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:11.623640  297527 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:11.624128  297527 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:11.624263  297527 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-241270" cluster setting kubeconfig missing "no-preload-241270" context setting]
	I1205 07:45:11.624583  297527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.627415  297527 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:11.637226  297527 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1205 07:45:11.637260  297527 kubeadm.go:602] duration metric: took 21.829958ms to restartPrimaryControlPlane
	I1205 07:45:11.637269  297527 kubeadm.go:403] duration metric: took 68.208908ms to StartCluster
	I1205 07:45:11.637303  297527 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.637380  297527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:11.638058  297527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.638302  297527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:11.638649  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:11.638713  297527 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:11.638807  297527 addons.go:70] Setting storage-provisioner=true in profile "no-preload-241270"
	I1205 07:45:11.638833  297527 addons.go:239] Setting addon storage-provisioner=true in "no-preload-241270"
	I1205 07:45:11.638861  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.639305  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.639576  297527 addons.go:70] Setting dashboard=true in profile "no-preload-241270"
	I1205 07:45:11.639602  297527 addons.go:239] Setting addon dashboard=true in "no-preload-241270"
	W1205 07:45:11.639635  297527 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:11.639677  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.640150  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.640549  297527 addons.go:70] Setting default-storageclass=true in profile "no-preload-241270"
	I1205 07:45:11.640575  297527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-241270"
	I1205 07:45:11.640869  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.649342  297527 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:11.650688  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:11.689326  297527 addons.go:239] Setting addon default-storageclass=true in "no-preload-241270"
	I1205 07:45:11.689364  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.689875  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.698805  297527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:11.699979  297527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:11.699999  297527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:11.700063  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.712159  297527 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:11.713511  297527 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:45:11.715045  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:11.715073  297527 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:11.715148  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.738771  297527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:11.738793  297527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:11.738903  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.747054  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.764636  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.774908  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.871002  297527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:11.907242  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:11.918814  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:11.946069  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:11.946108  297527 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:11.974547  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:11.974583  297527 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:12.027329  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:12.027368  297527 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:12.046838  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:12.046863  297527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:12.060336  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:12.060358  297527 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:12.073679  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:12.073741  297527 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:12.087034  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:12.087059  297527 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:12.099975  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:12.100054  297527 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:12.114362  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:12.114427  297527 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:12.127930  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:12.601802  297527 node_ready.go:35] waiting up to 6m0s for node "no-preload-241270" to be "Ready" ...
	W1205 07:45:12.602147  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602175  297527 retry.go:31] will retry after 295.526925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:12.602228  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602244  297527 retry.go:31] will retry after 231.271581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:12.602442  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602452  297527 retry.go:31] will retry after 367.435779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.834027  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:12.894864  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.894897  297527 retry.go:31] will retry after 449.347881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.898199  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:12.959840  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.959876  297527 retry.go:31] will retry after 452.29892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.971054  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.033404  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.033440  297527 retry.go:31] will retry after 214.476448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:13.248531  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.318057  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:13.318107  297527 retry.go:31] will retry after 615.086934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:13.344449  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:13.413298  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:13.463803  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.463839  297527 retry.go:31] will retry after 764.399145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	W1205 07:45:13.495656  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.495690  297527 retry.go:31] will retry after 674.75543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	I1205 07:45:13.933867  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.996287  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:13.996321  297527 retry.go:31] will retry after 1.043054158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:14.171232  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:14.228748  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:14.233505  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	I1205 07:45:14.233562  297527 retry.go:31] will retry after 795.385246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	W1205 07:45:14.289269  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:14.289352  297527 retry.go:31] will retry after 521.72183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	W1205 07:45:14.603077  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
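node_ready.go polls the node's Ready condition through the same apiserver, so while port 8443 refuses connections the readiness check cannot succeed either. A minimal client-go sketch of that kind of check (the kubeconfig path and node name are copied from the log for illustration; the code is an assumption, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-241270", metav1.GetOptions{})
		if err != nil {
			// Matches the failure above: the GET itself is refused.
			fmt.Println("cannot read node:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status)
			}
		}
	}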
	I1205 07:45:14.811650  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:14.890440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:14.890471  297527 retry.go:31] will retry after 828.939302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:15.031427  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:15.041144  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:15.134214  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	I1205 07:45:15.134298  297527 retry.go:31] will retry after 876.155433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	W1205 07:45:15.136562  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:15.136633  297527 retry.go:31] will retry after 670.908058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:15.720664  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:15.781108  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:15.781139  297527 retry.go:31] will retry after 1.485883423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:15.808382  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:15.871887  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:15.871920  297527 retry.go:31] will retry after 1.15355264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:16.011375  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:16.072314  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	I1205 07:45:16.072358  297527 retry.go:31] will retry after 1.604980836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: same storage-provisioner.yaml validation error as above]
	I1205 07:45:17.025791  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:17.092840  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	I1205 07:45:17.092875  297527 retry.go:31] will retry after 1.786675103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard manifest validation errors as above, all "connection refused"]
	W1205 07:45:17.102459  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
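Two distinct addresses fail the same way: the addon applies run on the node over SSH and hit https://localhost:8443, while this readiness check runs from the test host against https://192.168.76.2:8443. Both refusing connections points at the apiserver process itself rather than host-to-container networking. A minimal sketch that probes both endpoints (addresses taken from the log; the probe is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// localhost:8443 is what kubectl sees on the node;
		// 192.168.76.2:8443 is what the test host sees.
		for _, addr := range []string{"127.0.0.1:8443", "192.168.76.2:8443"} {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", addr)
		}
	}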
	I1205 07:45:17.267863  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:17.326258  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:17.326287  297527 retry.go:31] will retry after 3.666462279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: same storageclass.yaml validation error as above]
	I1205 07:45:17.678556  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:17.740156  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.740193  297527 retry.go:31] will retry after 3.530979089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:18.880121  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:18.940467  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:18.940499  297527 retry.go:31] will retry after 5.225523951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:19.103146  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
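Note: every "error validating" line above is one symptom repeated: kubectl cannot reach the apiserver on localhost:8443 to download the OpenAPI schema, so --validate=false would only hide the real failure. A minimal triage sketch (hypothetical follow-up commands, assuming shell access to the node, e.g. via minikube ssh -p <profile>):

	# "connection refused" here confirms the apiserver is not listening, independent of kubectl validation.
	curl -ksS https://localhost:8443/livez; echo
	# Check whether the kube-apiserver container is running at all under containerd.
	sudo crictl ps -a | grep kube-apiserver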
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:35:07 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:07.120012036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.515002428Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.517354324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.532799530Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:08 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:08.533584975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.745868119Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.748396074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.767530947Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:10 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:10.768194782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.006043536Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.008605838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.017778694Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:12 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:12.018958088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.461807200Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.464459398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.483967961Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:13 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:13.484870864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.041255724Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.061998057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.116644129Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.117620386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.606398197Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.608651268Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.616593045Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 05 07:35:15 newest-cni-622440 containerd[758]: time="2025-12-05T07:35:15.616937615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:45:23.085247    6740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:23.086032    6740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:23.087754    6740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:23.088318    6740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:45:23.090126    6740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:45:23 up  2:27,  0 user,  load average: 1.20, 1.15, 1.66
	Linux newest-cni-622440 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:20 newest-cni-622440 kubelet[6627]: E1205 07:45:20.908467    6627 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:20 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:21 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 05 07:45:21 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:21 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:21 newest-cni-622440 kubelet[6632]: E1205 07:45:21.654437    6632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:21 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:21 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:22 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 05 07:45:22 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:22 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:22 newest-cni-622440 kubelet[6654]: E1205 07:45:22.439709    6654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:22 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:22 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:45:23 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 05 07:45:23 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:23 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:45:23 newest-cni-622440 kubelet[6745]: E1205 07:45:23.178813    6745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:45:23 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:45:23 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
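Note: the kubelet crash loop above is the root failure for this profile: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and systemd keeps rescheduling it (restart counter 474-477). A quick check of the host's cgroup version (a sketch; run on the node):

	# cgroup2fs means cgroup v2 (unified hierarchy); tmpfs means cgroup v1, which this kubelet rejects.
	stat -fc %T /sys/fs/cgroup/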
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 6 (316.274404ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:45:23.540726  299370 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-622440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (116.10s)
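Note: the exit status 6 above comes from a kubeconfig with no "newest-cni-622440" entry; as the stdout hint says, minikube update-context rewrites the context for a profile. A usage sketch (assuming the profile still exists on disk):

	out/minikube-linux-arm64 update-context -p newest-cni-622440
	out/minikube-linux-arm64 status -p newest-cni-622440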

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (372.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1205 07:45:06.310468    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m7.927013267s)

                                                
                                                
-- stdout --
	* [no-preload-241270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-241270" primary control-plane node in "no-preload-241270" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:45:04.741898  297527 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:04.742044  297527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:04.742057  297527 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:04.742062  297527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:04.742346  297527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:04.742756  297527 out.go:368] Setting JSON to false
	I1205 07:45:04.743604  297527 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8852,"bootTime":1764911853,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:04.743672  297527 start.go:143] virtualization:  
	I1205 07:45:04.745353  297527 out.go:179] * [no-preload-241270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:04.746789  297527 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:04.746911  297527 notify.go:221] Checking for updates...
	I1205 07:45:04.749277  297527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:04.750300  297527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:04.751350  297527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:04.752611  297527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:04.753666  297527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:04.755353  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:04.755968  297527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:04.781283  297527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:04.781395  297527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:04.846713  297527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:04.837015499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:04.846825  297527 docker.go:319] overlay module found
	I1205 07:45:04.848256  297527 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:04.849414  297527 start.go:309] selected driver: docker
	I1205 07:45:04.849427  297527 start.go:927] validating driver "docker" against &{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:04.849510  297527 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:04.850207  297527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:04.905282  297527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:04.89574809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:04.905644  297527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:45:04.905674  297527 cni.go:84] Creating CNI manager for ""
	I1205 07:45:04.905729  297527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:04.905772  297527 start.go:353] cluster config:
	{Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:04.907336  297527 out.go:179] * Starting "no-preload-241270" primary control-plane node in "no-preload-241270" cluster
	I1205 07:45:04.908619  297527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:04.909942  297527 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:04.911278  297527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:04.911360  297527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:04.911408  297527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:45:04.911696  297527 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911781  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:04.911795  297527 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.205µs
	I1205 07:45:04.911812  297527 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:04.911829  297527 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911875  297527 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911911  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:04.911918  297527 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 90.077µs
	I1205 07:45:04.911924  297527 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:04.911935  297527 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911950  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:04.911959  297527 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 94.606µs
	I1205 07:45:04.911964  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:04.911967  297527 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:04.911970  297527 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 36.456µs
	I1205 07:45:04.911975  297527 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:04.911979  297527 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.911988  297527 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912011  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:04.912017  297527 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 39.574µs
	I1205 07:45:04.912021  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:04.912023  297527 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:04.912027  297527 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 43.389µs
	I1205 07:45:04.912032  297527 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:04.912034  297527 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912052  297527 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.912065  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:04.912072  297527 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.246µs
	I1205 07:45:04.912078  297527 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:04.912081  297527 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:04.912086  297527 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.266µs
	I1205 07:45:04.912092  297527 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:04.912111  297527 cache.go:87] Successfully saved all images to host disk.
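Note: with --preload=false, minikube skips the preloaded tarball and relies on its per-image cache; each "exists ... succeeded" pair above is a cache hit, which is why every lookup completes in microseconds. A sketch for inspecting that cache by hand (path taken from this log):

	ls /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/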
	I1205 07:45:04.931490  297527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:04.931513  297527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:45:04.931532  297527 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:04.931564  297527 start.go:360] acquireMachinesLock for no-preload-241270: {Name:mk38da592769bcf9f80cfe38cf457b769a394afe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:04.931618  297527 start.go:364] duration metric: took 35.66µs to acquireMachinesLock for "no-preload-241270"
	I1205 07:45:04.931642  297527 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:04.931648  297527 fix.go:54] fixHost starting: 
	I1205 07:45:04.931902  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:04.949375  297527 fix.go:112] recreateIfNeeded on no-preload-241270: state=Stopped err=<nil>
	W1205 07:45:04.949405  297527 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:45:04.950949  297527 out.go:252] * Restarting existing docker container for "no-preload-241270" ...
	I1205 07:45:04.951034  297527 cli_runner.go:164] Run: docker start no-preload-241270
	I1205 07:45:05.216852  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:05.235890  297527 kic.go:430] container "no-preload-241270" state is running.
	I1205 07:45:05.236264  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:05.258841  297527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/config.json ...
	I1205 07:45:05.259070  297527 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:05.259126  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:05.279161  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:05.279482  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:05.279490  297527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:05.280130  297527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50756->127.0.0.1:33098: read: connection reset by peer
	I1205 07:45:08.432794  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:45:08.432818  297527 ubuntu.go:182] provisioning hostname "no-preload-241270"
	I1205 07:45:08.432884  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.451184  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:08.451509  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:08.451525  297527 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-241270 && echo "no-preload-241270" | sudo tee /etc/hostname
	I1205 07:45:08.610187  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-241270
	
	I1205 07:45:08.610264  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.628575  297527 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:08.628883  297527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1205 07:45:08.628900  297527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241270/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:08.777451  297527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:08.777540  297527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:08.777572  297527 ubuntu.go:190] setting up certificates
	I1205 07:45:08.777619  297527 provision.go:84] configureAuth start
	I1205 07:45:08.777736  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:08.794924  297527 provision.go:143] copyHostCerts
	I1205 07:45:08.794996  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:08.795005  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:08.795083  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:08.795192  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:08.795197  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:08.795222  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:08.795281  297527 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:08.795286  297527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:08.795309  297527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:08.795359  297527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.no-preload-241270 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-241270]
	I1205 07:45:08.877001  297527 provision.go:177] copyRemoteCerts
	I1205 07:45:08.877073  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:08.877113  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:08.894726  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.000993  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:09.022057  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:09.042245  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:45:09.061318  297527 provision.go:87] duration metric: took 283.659327ms to configureAuth
	I1205 07:45:09.061344  297527 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:09.061595  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:09.061608  297527 machine.go:97] duration metric: took 3.802530887s to provisionDockerMachine
	I1205 07:45:09.061617  297527 start.go:293] postStartSetup for "no-preload-241270" (driver="docker")
	I1205 07:45:09.061646  297527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:09.061708  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:09.061761  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.079966  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.185389  297527 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:09.188869  297527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:09.188894  297527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:09.188906  297527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:09.188962  297527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:09.189042  297527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:09.189146  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:09.196675  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:09.214750  297527 start.go:296] duration metric: took 153.100248ms for postStartSetup
	I1205 07:45:09.214829  297527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:09.214868  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.234509  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.338972  297527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:09.343677  297527 fix.go:56] duration metric: took 4.41202113s for fixHost
	I1205 07:45:09.343702  297527 start.go:83] releasing machines lock for "no-preload-241270", held for 4.412070689s
	I1205 07:45:09.343823  297527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241270
	I1205 07:45:09.361505  297527 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:09.361559  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.361646  297527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:09.361704  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:09.379923  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.391234  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:09.569715  297527 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:09.576254  297527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:09.580750  297527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:09.580847  297527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:09.588564  297527 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:09.588636  297527 start.go:496] detecting cgroup driver to use...
	I1205 07:45:09.588673  297527 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:09.588723  297527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:09.606328  297527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:09.622202  297527 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:09.622277  297527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:09.638550  297527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:09.653051  297527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:09.772617  297527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:09.889348  297527 docker.go:234] disabling docker service ...
	I1205 07:45:09.889427  297527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:09.904346  297527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:09.917622  297527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:10.040200  297527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:10.152544  297527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:10.165439  297527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:10.179564  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:10.189347  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:10.198827  297527 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:10.198955  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:10.207791  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:10.216508  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:10.225118  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:10.234468  297527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:10.242776  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:10.251554  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:10.260148  297527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:10.269308  297527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:10.277914  297527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:10.285361  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:10.410905  297527 ssh_runner.go:195] Run: sudo systemctl restart containerd
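
The sed pipeline above rewrites /etc/containerd/config.toml in place before the daemon is restarted: it pins the sandbox (pause) image, forces SystemdCgroup = false to match the host's detected cgroupfs driver, and normalizes the runc runtime names to io.containerd.runc.v2. A sketch of the same rewrites as in-memory regexp replacements, mirroring the sed expressions in the log (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// rewriteContainerdConfig applies the same edits the sed commands above
// perform, preserving each line's leading indentation via the ${1} group.
func rewriteContainerdConfig(cfg string) string {
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
	}
	for _, r := range rules {
		cfg = regexp.MustCompile(r.re).ReplaceAllString(cfg, r.repl)
	}
	return cfg
}

func main() {
	fmt.Print(rewriteContainerdConfig("  SystemdCgroup = true\n")) // prints: "  SystemdCgroup = false"
}
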
	I1205 07:45:10.500880  297527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:10.501025  297527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:10.505005  297527 start.go:564] Will wait 60s for crictl version
	I1205 07:45:10.505096  297527 ssh_runner.go:195] Run: which crictl
	I1205 07:45:10.508636  297527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:10.534679  297527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:10.534786  297527 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:10.555373  297527 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:10.576157  297527 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:10.577470  297527 cli_runner.go:164] Run: docker network inspect no-preload-241270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:10.597528  297527 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:10.601385  297527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
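
The bash one-liner above is an idempotent /etc/hosts upsert: drop any line already ending in a tab plus host.minikube.internal, append the fresh mapping, write to a temp file, then copy it back over /etc/hosts. The same logic expressed in Go (a hypothetical helper; the name and shape are illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry keeps every line that does not already map `name`,
// then appends a fresh "ip<TAB>name" entry — the grep -v / echo pipeline.
func upsertHostsEntry(hosts, ip, name string) string {
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.76.1", "host.minikube.internal"))
}
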
	I1205 07:45:10.611008  297527 kubeadm.go:884] updating cluster {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:10.611124  297527 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:10.611179  297527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:10.634698  297527 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:10.634718  297527 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:10.634726  297527 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:10.634828  297527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:10.634890  297527 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:10.659475  297527 cni.go:84] Creating CNI manager for ""
	I1205 07:45:10.659538  297527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:10.659576  297527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:45:10.659612  297527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241270 NodeName:no-preload-241270 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:10.659747  297527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-241270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:45:10.659841  297527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:10.667533  297527 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:10.667605  297527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:10.675030  297527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:10.687622  297527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:10.701676  297527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
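
The generated kubeadm config is deliberately copied to /var/tmp/minikube/kubeadm.yaml.new rather than straight over kubeadm.yaml; later in this log (07:45:11.627415) it is diffed against the live file to decide whether the control plane actually needs reconfiguring. A sketch of that decision, assuming diff's usual exit codes (0 = same, 1 = different, >1 = error):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig mirrors the `sudo diff -u kubeadm.yaml kubeadm.yaml.new`
// check seen later in this log: exit 0 means the running cluster's config
// already matches and the restart path can skip kubeadm re-init.
func needsReconfig() (bool, error) {
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err == nil {
		return false, nil // configs identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: reconfiguration needed
	}
	return false, err // diff itself failed
}

func main() {
	changed, err := needsReconfig()
	fmt.Println(changed, err)
}
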
	I1205 07:45:10.719568  297527 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:10.723580  297527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:10.734183  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:10.856033  297527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:10.872825  297527 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270 for IP: 192.168.76.2
	I1205 07:45:10.872854  297527 certs.go:195] generating shared ca certs ...
	I1205 07:45:10.872897  297527 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:10.873107  297527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:10.873257  297527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:10.873273  297527 certs.go:257] generating profile certs ...
	I1205 07:45:10.873421  297527 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/client.key
	I1205 07:45:10.873539  297527 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key.0d209330
	I1205 07:45:10.873622  297527 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key
	I1205 07:45:10.873780  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:10.873830  297527 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:10.873858  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:10.873896  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:10.873945  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:10.873974  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:10.874054  297527 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:10.874806  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:10.899986  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:10.916573  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:10.934642  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:10.953079  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:10.969788  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:10.986475  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:11.004065  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/no-preload-241270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:11.024377  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:11.042754  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:11.062063  297527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:11.080346  297527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:11.093286  297527 ssh_runner.go:195] Run: openssl version
	I1205 07:45:11.101677  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.110165  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:11.118933  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.123646  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.123764  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:11.167706  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:11.175358  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.183062  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:11.190505  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.194367  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.194436  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:11.235889  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:11.243256  297527 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.250726  297527 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:11.257911  297527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.261666  297527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.261727  297527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:11.303155  297527 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
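
The repeated openssl/ln/test sequence above installs each CA into /etc/ssl/certs under its subject-name hash (b5213941.0, 51391683.0, 3ec20f2e.0) — the c_rehash-style layout OpenSSL's CApath lookup expects. A rough Go equivalent that shells out to openssl just as the log does (the helper name is hypothetical):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the certificate's subject-name hash and creates
// the /etc/ssl/certs/<hash>.0 symlink, matching the ln -fs in the log.
func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
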
	I1205 07:45:11.311098  297527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:11.315323  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:11.356438  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:11.397372  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:11.438383  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:11.479494  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:11.522908  297527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
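
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration on this restart path. The same check in pure Go with crypto/x509, as a hedged sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemBytes expires
// within d — the question `openssl x509 -checkend` answers via exit status.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	b, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	soon, err := expiresWithin(b, 24*time.Hour)
	fmt.Println(soon, err)
}
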
	I1205 07:45:11.569080  297527 kubeadm.go:401] StartCluster: {Name:no-preload-241270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-241270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:11.569205  297527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:11.569298  297527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:11.606344  297527 cri.go:89] found id: ""
	I1205 07:45:11.606450  297527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:11.615404  297527 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:11.615424  297527 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:11.615508  297527 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:11.623640  297527 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:11.624128  297527 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-241270" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:11.624263  297527 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-241270" cluster setting kubeconfig missing "no-preload-241270" context setting]
	I1205 07:45:11.624583  297527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.627415  297527 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:11.637226  297527 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1205 07:45:11.637260  297527 kubeadm.go:602] duration metric: took 21.829958ms to restartPrimaryControlPlane
	I1205 07:45:11.637269  297527 kubeadm.go:403] duration metric: took 68.208908ms to StartCluster
	I1205 07:45:11.637303  297527 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.637380  297527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:11.638058  297527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:11.638302  297527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:11.638649  297527 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:11.638713  297527 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:11.638807  297527 addons.go:70] Setting storage-provisioner=true in profile "no-preload-241270"
	I1205 07:45:11.638833  297527 addons.go:239] Setting addon storage-provisioner=true in "no-preload-241270"
	I1205 07:45:11.638861  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.639305  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.639576  297527 addons.go:70] Setting dashboard=true in profile "no-preload-241270"
	I1205 07:45:11.639602  297527 addons.go:239] Setting addon dashboard=true in "no-preload-241270"
	W1205 07:45:11.639635  297527 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:11.639677  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.640150  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.640549  297527 addons.go:70] Setting default-storageclass=true in profile "no-preload-241270"
	I1205 07:45:11.640575  297527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-241270"
	I1205 07:45:11.640869  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.649342  297527 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:11.650688  297527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:11.689326  297527 addons.go:239] Setting addon default-storageclass=true in "no-preload-241270"
	I1205 07:45:11.689364  297527 host.go:66] Checking if "no-preload-241270" exists ...
	I1205 07:45:11.689875  297527 cli_runner.go:164] Run: docker container inspect no-preload-241270 --format={{.State.Status}}
	I1205 07:45:11.698805  297527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:11.699979  297527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:11.699999  297527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:11.700063  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.712159  297527 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:11.713511  297527 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:45:11.715045  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:11.715073  297527 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:11.715148  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.738771  297527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:11.738793  297527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:11.738903  297527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241270
	I1205 07:45:11.747054  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.764636  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.774908  297527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/no-preload-241270/id_rsa Username:docker}
	I1205 07:45:11.871002  297527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:11.907242  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:11.918814  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:11.946069  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:11.946108  297527 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:11.974547  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:11.974583  297527 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:12.027329  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:12.027368  297527 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:12.046838  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:12.046863  297527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:12.060336  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:12.060358  297527 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:12.073679  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:12.073741  297527 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:12.087034  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:12.087059  297527 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:12.099975  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:12.100054  297527 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:12.114362  297527 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:12.114427  297527 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:12.127930  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:12.601802  297527 node_ready.go:35] waiting up to 6m0s for node "no-preload-241270" to be "Ready" ...
	W1205 07:45:12.602147  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602175  297527 retry.go:31] will retry after 295.526925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:12.602228  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602244  297527 retry.go:31] will retry after 231.271581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:12.602442  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.602452  297527 retry.go:31] will retry after 367.435779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.834027  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:12.894864  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.894897  297527 retry.go:31] will retry after 449.347881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.898199  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:12.959840  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:12.959876  297527 retry.go:31] will retry after 452.29892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
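
The retry.go:31 entries above show the pattern used while the apiserver on localhost:8443 is still coming up: each failed kubectl apply is retried after a short, slightly randomized delay. A generic sketch of such a helper (this is not minikube's retry package, just the shape of the behavior in the log):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered delay between
// failures, in the spirit of the "will retry after ..." entries above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // simulated apiserver not ready
		}
		return nil
	})
	fmt.Println("done:", err)
}
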
	I1205 07:45:12.971054  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.033404  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.033440  297527 retry.go:31] will retry after 214.476448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.248531  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.318057  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.318107  297527 retry.go:31] will retry after 615.086934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.344449  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:13.413298  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:13.463803  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.463839  297527 retry.go:31] will retry after 764.399145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:13.495656  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.495690  297527 retry.go:31] will retry after 674.75543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.933867  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:13.996287  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:13.996321  297527 retry.go:31] will retry after 1.043054158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:14.171232  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:14.228748  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:14.233505  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:14.233562  297527 retry.go:31] will retry after 795.385246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:14.289269  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:14.289352  297527 retry.go:31] will retry after 521.72183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:14.603077  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:14.811650  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:14.890440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:14.890471  297527 retry.go:31] will retry after 828.939302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.031427  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:15.041144  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:15.134214  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.134298  297527 retry.go:31] will retry after 876.155433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:15.136562  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.136633  297527 retry.go:31] will retry after 670.908058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.720664  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:15.781108  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.781139  297527 retry.go:31] will retry after 1.485883423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.808382  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:15.871887  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:15.871920  297527 retry.go:31] will retry after 1.15355264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:16.011375  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:16.072314  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:16.072358  297527 retry.go:31] will retry after 1.604980836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.025791  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:17.092840  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.092875  297527 retry.go:31] will retry after 1.786675103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:17.102459  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:17.267863  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:17.326258  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.326287  297527 retry.go:31] will retry after 3.666462279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.678556  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:17.740156  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:17.740193  297527 retry.go:31] will retry after 3.530979089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:18.880121  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:18.940467  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:18.940499  297527 retry.go:31] will retry after 5.225523951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:45:19.103146  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:20.993955  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:21.057263  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:21.057291  297527 retry.go:31] will retry after 6.169823914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:21.272261  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:21.329248  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:21.329292  297527 retry.go:31] will retry after 5.225895034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:45:21.602787  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:23.603166  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
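Note: in parallel with the addon applies, node_ready.go polls the node's Ready condition against https://192.168.76.2:8443 and hits the same refused connection. A sketch of that kind of Ready check using client-go follows; it assumes a reachable cluster and a valid kubeconfig, and the kubeconfig path and node name are taken from this log purely for illustration:

// nodeready.go — a sketch of the kind of Ready-condition check that
// node_ready.go is retrying above, written against client-go. Assumes
// a reachable cluster; paths and names are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady("/var/lib/minikube/kubeconfig", "no-preload-241270")
	fmt.Println(ready, err)
}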
	I1205 07:45:24.166394  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:24.248312  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:24.248343  297527 retry.go:31] will retry after 7.037113237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:45:26.103157  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:26.555898  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:26.616440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:26.616472  297527 retry.go:31] will retry after 4.350402654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:27.227883  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:27.290238  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.290274  297527 retry.go:31] will retry after 4.46337589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:45:28.602428  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:30.602600  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:30.967025  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:31.052948  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.052985  297527 retry.go:31] will retry after 7.944795354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:31.285879  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:31.386500  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.386531  297527 retry.go:31] will retry after 6.357223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:31.754709  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:31.845913  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.845950  297527 retry.go:31] will retry after 12.860014736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:45:33.103254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:35.602346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:37.602757  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:37.744028  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.809224  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.809268  297527 retry.go:31] will retry after 8.525278844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:38.998921  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:39.069453  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.069501  297527 retry.go:31] will retry after 21.498999078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:45:40.102833  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:42.602360  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:44.706625  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:44.764830  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.764865  297527 retry.go:31] will retry after 17.369945393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:45:45.102956  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:46.334817  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:46.418483  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:46.418521  297527 retry.go:31] will retry after 23.303020683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:45:47.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:49.602799  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:52.102357  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:54.603289  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:57.103152  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:59.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:00.568740  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:00.647111  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:00.647143  297527 retry.go:31] will retry after 19.124891194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:46:01.602386  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:02.135738  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:02.196508  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:02.196541  297527 retry.go:31] will retry after 23.234297555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
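Note: by this point the backoff has grown past 20s and every apply still fails at the same dial step. An alternative to retrying kubectl blindly is to gate the applies on apiserver health, e.g. by polling /readyz until it answers 200. A hedged Go sketch follows; the endpoint, timings, and skipped TLS verification are illustrative assumptions, and /readyz may require credentials on locked-down clusters:

// waitapiserver.go — a sketch of gating addon applies on apiserver
// health by polling /readyz until it answers, instead of retrying
// kubectl while the port refuses connections. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving cert is not trusted by default here;
		// a real caller would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.76.2:8443/readyz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}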
	W1205 07:46:04.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:06.103226  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:08.602282  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:09.722604  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:09.788810  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:09.788894  297527 retry.go:31] will retry after 37.030083188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:10.602342  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:13.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:15.602302  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:17.603239  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:19.772903  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:19.832639  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:19.832668  297527 retry.go:31] will retry after 32.800355392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:20.103191  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:22.602639  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:24.603138  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:25.431569  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:25.488990  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:25.489023  297527 retry.go:31] will retry after 28.819883279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:27.102333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	[... same "Ready" retry logged 7 more times between 07:46:29 and 07:46:43 ...]
	W1205 07:46:45.103537  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:46.819177  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:46.909187  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:46.909286  297527 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:47.602297  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:49.602426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:51.603232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:52.633653  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:52.692000  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:52.692106  297527 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:54.102683  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:54.310076  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:54.372261  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:54.372370  297527 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:46:54.375412  297527 out.go:179] * Enabled addons: 
	I1205 07:46:54.378282  297527 addons.go:530] duration metric: took 1m42.739564939s for enable addons: enabled=[]
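The "will retry after ..." entries above (19.1s, 23.2s, 37.0s, 32.8s, 28.8s) show jittered, roughly growing delays between apply attempts, capped by an overall budget; once the budget is spent each addon is reported as failed and the enabled set comes back empty. A rough sketch of that shape follows, using a made-up retryApply helper rather than minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply is a hypothetical helper mirroring the retry shape in the
	// log: re-run apply with a jittered delay until it succeeds or the time
	// budget is exhausted. It is not minikube's actual implementation.
	func retryApply(apply func() error, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		base := 15 * time.Second
		var err error
		for attempt := 1; ; attempt++ {
			if err = apply(); err == nil {
				return nil
			}
			// Jittered delay in the 15s..45s range, like the entries above.
			delay := base + time.Duration(rand.Int63n(int64(2*base)))
			if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("after %d attempts: %w", attempt, err)
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		err := retryApply(func() error {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}, 100*time.Second)
		fmt.Println(err) // the addon is then reported as failed, enabled=[]
	}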
	W1205 07:46:56.102997  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	[... same "Ready" retry logged 110 more times at roughly 2.5s intervals, 07:46:58 through 07:51:09 ...]
	W1205 07:51:11.602638  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:12.602298  297527 node_ready.go:38] duration metric: took 6m0.000452624s for node "no-preload-241270" to be "Ready" ...
	I1205 07:51:12.605551  297527 out.go:203] 
	W1205 07:51:12.608371  297527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 07:51:12.608388  297527 out.go:285] * 
	W1205 07:51:12.610554  297527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:51:12.612665  297527 out.go:203] 

** /stderr **
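Note the cadence in the stderr above: the node "Ready" probe is retried roughly every 2.5s until the 6m0s budget expires with "context deadline exceeded". The shape of that wait loop, as an illustrative Go sketch (not minikube's actual node_ready.go; the real check parses the node's Ready condition rather than just the HTTP status):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollNodeReady GETs the node object until the context deadline expires.
// Illustrative only: the real code inspects the returned conditions.
func pollNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The apiserver cert is not trusted in this sketch; real code uses the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node object fetched; real code then checks "Ready"
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(pollNodeReady(ctx, "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"))
}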
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
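Exit status 80 is the GUEST_START error class seen in the output above. A minimal sketch of asserting a subprocess exit code from Go, in the spirit of the test's (dbg) Run helper; the command line is copied from the failing test:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "no-preload-241270",
		"--memory=3072", "--alsologtostderr", "--wait=true", "--preload=false",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.35.0-beta.0")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit status %d\n", ee.ExitCode()) // the test observed 80 here
	} else if err != nil {
		fmt.Println("failed to run:", err)
	}
}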
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:04.977832919Z",
	            "FinishedAt": "2025-12-05T07:45:03.670727358Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a57e08b617e6c99db8e0606f807966baa2265951deec9d7f31b28b674772ba7",
	            "SandboxKey": "/var/run/docker/netns/6a57e08b617e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:5e:e9:4a:59:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "8aadf1070cfccbd0175d1614c4a1ee7cb617e6ca8ef7cab3c7e2ce89af3cf831",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
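The NetworkSettings.Ports map above holds the randomly assigned host ports; later in this log the provisioner reads the 22/tcp entry with a Go template via docker container inspect -f. The same lookup for the apiserver port, as a sketch (values match the inspect output above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape the log shows for "22/tcp", applied to "8443/tcp".
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"no-preload-241270").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // prints 33101 per the output above
}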
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 2 (323.788382ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-241270 logs -n 25: (1.69296676s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ stop    │ -p no-preload-241270 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ stop    │ -p newest-cni-622440 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-622440 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
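Entries below follow the klog header format stated above. A small Go sketch of splitting such a line into its fields, assuming only that documented format:

package main

import (
	"fmt"
	"regexp"
)

// Matches: severity, month+day, time, thread id, file:line, message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}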
	I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:25.090022  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090052  299667 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:25.090069  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090384  299667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:25.090842  299667 out.go:368] Setting JSON to false
	I1205 07:45:25.091806  299667 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8872,"bootTime":1764911853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:25.091916  299667 start.go:143] virtualization:  
	I1205 07:45:25.094988  299667 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:25.098817  299667 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:25.098909  299667 notify.go:221] Checking for updates...
	I1205 07:45:25.105041  299667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:25.108085  299667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:25.111075  299667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:25.114070  299667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:25.117093  299667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:25.120796  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:25.121387  299667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:25.146702  299667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:25.146810  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.201970  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.192879595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.202086  299667 docker.go:319] overlay module found
	I1205 07:45:25.205420  299667 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:25.208200  299667 start.go:309] selected driver: docker
	I1205 07:45:25.208216  299667 start.go:927] validating driver "docker" against &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString
: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.208322  299667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:25.209018  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.271889  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.262935561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.272253  299667 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:45:25.272290  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:25.272360  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:25.272408  299667 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.275549  299667 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:45:25.278335  299667 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:25.281398  299667 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:25.284371  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:25.284526  299667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:25.304420  299667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:25.304443  299667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:45:25.350688  299667 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:45:25.522612  299667 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
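Both preload locations return 404: no preload tarball is published for v1.35.0-beta.0, which is why this profile runs with --preload=false and minikube falls back to caching images individually below. An illustrative probe of the first URL (minikube's preload.go check may differ in detail, e.g. GET vs. HEAD):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Printf("status code: %d; falling back to per-image cache\n", resp.StatusCode)
	}
}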
	I1205 07:45:25.522872  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.522902  299667 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.522986  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:25.522997  299667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.314µs
	I1205 07:45:25.523010  299667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:25.523020  299667 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523050  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:25.523054  299667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.177µs
	I1205 07:45:25.523060  299667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523070  299667 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523108  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:25.523117  299667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.906µs
	I1205 07:45:25.523123  299667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523137  299667 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523144  299667 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:25.523164  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:25.523170  299667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.867µs
	I1205 07:45:25.523176  299667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523180  299667 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523184  299667 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523220  299667 start.go:364] duration metric: took 26.043µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:45:25.523232  299667 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:25.523223  299667 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523248  299667 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:25.523282  299667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.595µs
	I1205 07:45:25.523288  299667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:25.523289  299667 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523237  299667 fix.go:54] fixHost starting: 
	I1205 07:45:25.523319  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:25.523328  299667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 144.182µs
	I1205 07:45:25.523335  299667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523296  299667 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 85.228µs
	I1205 07:45:25.523346  299667 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:25.523368  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:25.523373  299667 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 85.498µs
	I1205 07:45:25.523378  299667 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:25.523390  299667 cache.go:87] Successfully saved all images to host disk.
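Each "exists ... succeeded" pair above is an idempotency check done under a per-image lock: the tarball path is stat'ed and the save is skipped on a hit, which is why every image completes in microseconds here. A sketch of that check, with the path copied from the log:

package main

import (
	"fmt"
	"os"
)

// cached reports whether an image tarball already exists on the host,
// mirroring the "exists ... skipping" decisions in the log above.
func cached(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	p := "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
	if cached(p) {
		fmt.Println("cache hit, skipping save:", p)
	} else {
		fmt.Println("cache miss, would save tar file:", p)
	}
}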
	I1205 07:45:25.523585  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.542111  299667 fix.go:112] recreateIfNeeded on newest-cni-622440: state=Stopped err=<nil>
	W1205 07:45:25.542142  299667 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:45:26.103157  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:26.555898  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:26.616440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:26.616472  297527 retry.go:31] will retry after 4.350402654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.227883  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:27.290238  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.290274  297527 retry.go:31] will retry after 4.46337589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:28.602428  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:25.545608  299667 out.go:252] * Restarting existing docker container for "newest-cni-622440" ...
	I1205 07:45:25.545717  299667 cli_runner.go:164] Run: docker start newest-cni-622440
	I1205 07:45:25.826053  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.856383  299667 kic.go:430] container "newest-cni-622440" state is running.
	I1205 07:45:25.856775  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:25.877321  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.877542  299667 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:25.878047  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:25.903226  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:25.903553  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:25.903561  299667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:25.904107  299667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35184->127.0.0.1:33103: read: connection reset by peer
	I1205 07:45:29.056730  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.056754  299667 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:45:29.056818  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.074923  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.075238  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.075256  299667 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:45:29.238817  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.238924  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.256394  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.256698  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.256720  299667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:29.409360  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:29.409384  299667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:29.409403  299667 ubuntu.go:190] setting up certificates
	I1205 07:45:29.409412  299667 provision.go:84] configureAuth start
	I1205 07:45:29.409469  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:29.426522  299667 provision.go:143] copyHostCerts
	I1205 07:45:29.426598  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:29.426610  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:29.426695  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:29.426806  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:29.426817  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:29.426846  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:29.426910  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:29.426920  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:29.426946  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:29.427008  299667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
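The server certificate is generated with SANs covering the loopback address, the container IP (192.168.85.2), and the minikube hostnames, so one cert serves every address the endpoint can be dialed on. To see which SANs a generated cert actually carries, a standard openssl inspection works (path taken from the log line above):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'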
	I1205 07:45:29.583992  299667 provision.go:177] copyRemoteCerts
	I1205 07:45:29.584079  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:29.584142  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.601241  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.705331  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:29.723929  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:29.741035  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:45:29.758654  299667 provision.go:87] duration metric: took 349.219709ms to configureAuth
	I1205 07:45:29.758682  299667 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:29.758882  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:29.758893  299667 machine.go:97] duration metric: took 3.881342431s to provisionDockerMachine
	I1205 07:45:29.758901  299667 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:45:29.758917  299667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:29.758966  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:29.759008  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.777016  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.881927  299667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:29.889885  299667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:29.889915  299667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:29.889927  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:29.889986  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:29.890075  299667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:29.890181  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:29.899716  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:29.920554  299667 start.go:296] duration metric: took 161.628343ms for postStartSetup
	I1205 07:45:29.920647  299667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:29.920717  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.938834  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.040045  299667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:30.045649  299667 fix.go:56] duration metric: took 4.522402293s for fixHost
	I1205 07:45:30.045683  299667 start.go:83] releasing machines lock for "newest-cni-622440", held for 4.522453444s
	I1205 07:45:30.045767  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:30.065623  299667 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:30.065678  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.065694  299667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:30.065761  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.087940  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.099183  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.281502  299667 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:30.288110  299667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:30.292481  299667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:30.292550  299667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:30.300562  299667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
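Pre-existing bridge or podman CNI configs would compete with the CNI minikube manages (kindnet, per the later log line), so the find command above renames them with a .mk_disabled suffix rather than deleting them. Reversing that by hand is a one-liner (a hedged sketch, not a minikube command):

    # Restore configs minikube side-lined; a no-op when none were disabled.
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
        -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;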
	I1205 07:45:30.300584  299667 start.go:496] detecting cgroup driver to use...
	I1205 07:45:30.300616  299667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:30.300666  299667 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:30.318364  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:30.332088  299667 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:30.332151  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:30.348258  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:30.361775  299667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:30.469361  299667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:30.577441  299667 docker.go:234] disabling docker service ...
	I1205 07:45:30.577508  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:30.592915  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:30.607578  299667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:30.752107  299667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:30.872747  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:30.888408  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:30.904134  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:30.914385  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:30.923315  299667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:30.923423  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:30.932175  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.940943  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:30.949729  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.958228  299667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:30.965941  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:30.980042  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:30.995740  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:31.009747  299667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:31.019595  299667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:31.028525  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.153254  299667 ssh_runner.go:195] Run: sudo systemctl restart containerd
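The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) to match the "cgroupfs" driver detected on the host; a kubelet/runtime cgroup-driver mismatch is a classic source of unstable pods. Two quick post-restart checks (standard tools, nothing minikube-specific; crictl's exact output shape varies by version):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo crictl info | grep -i cgroup                     # the runtime's own view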
	I1205 07:45:31.252043  299667 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:31.252123  299667 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:31.255724  299667 start.go:564] Will wait 60s for crictl version
	I1205 07:45:31.255784  299667 ssh_runner.go:195] Run: which crictl
	I1205 07:45:31.259402  299667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:31.288033  299667 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:31.288102  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.310723  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.334839  299667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:31.337671  299667 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
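The Go template above flattens name, driver, subnet, gateway, MTU and per-container IPs into one JSON-ish string in a single docker call. When debugging by hand, individual fields can be pulled with much simpler templates, e.g.:

    docker network inspect newest-cni-622440 \
        --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'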
	I1205 07:45:31.359874  299667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:31.365663  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
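The /etc/hosts rewrite above deliberately avoids sed -i: inside a container /etc/hosts is typically a bind mount managed by the runtime, so the file has to be updated in place rather than replaced by a new inode. The pattern, generically (same shape as the logged command, which uses a literal tab in the echo):

    # Filter out the old entry, append the new one, then copy over the bind mount.
    { grep -v 'old-entry-pattern' /etc/hosts; echo '192.168.85.1 host.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts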
	I1205 07:45:31.387524  299667 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:45:31.390412  299667 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:31.390547  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:31.390648  299667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:31.429142  299667 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:31.429206  299667 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:31.429215  299667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:31.429338  299667 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:31.429419  299667 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:31.463460  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:31.463487  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:31.463511  299667 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:45:31.463580  299667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:31.463714  299667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:45:31.463789  299667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:31.471606  299667 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:31.471702  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:31.480080  299667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:31.492950  299667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:31.505530  299667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
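At this point the rendered config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) sits at /var/tmp/minikube/kubeadm.yaml.new, to be diffed against the live copy further down. kubeadm can sanity-check such a file without touching the cluster; a hedged example, not a step this run performs, and preflight checks may still complain on a node with an existing /etc/kubernetes:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run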
	I1205 07:45:31.518323  299667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:31.521961  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.531618  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.655593  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:31.673339  299667 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:45:31.673398  299667 certs.go:195] generating shared ca certs ...
	I1205 07:45:31.673427  299667 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:31.673592  299667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:31.673665  299667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:31.673695  299667 certs.go:257] generating profile certs ...
	I1205 07:45:31.673812  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:45:31.673907  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:45:31.673970  299667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:45:31.674103  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:31.674164  299667 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:31.674197  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:31.674246  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:31.674289  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:31.674341  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:31.674413  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:31.675038  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:31.699874  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:31.718981  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:31.739011  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:31.757897  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:31.776123  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:31.794286  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:31.815714  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:31.832875  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:31.851417  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:31.868401  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:31.885858  299667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:31.898468  299667 ssh_runner.go:195] Run: openssl version
	I1205 07:45:31.904594  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.911851  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:31.919124  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922684  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922758  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.963682  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:31.970739  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.977808  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:31.985046  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988699  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988790  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:32.029966  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:32.037736  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.045196  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:32.052663  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056573  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056689  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.097976  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
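Each CA ends up installed twice: the PEM under /usr/share/ca-certificates and a symlink under /etc/ssl/certs named after the certificate's subject hash (51391683.0 above), which is how OpenSSL locates an issuer at verification time. The link name comes straight from the hash the log computes:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem)
    sudo ln -fs /usr/share/ca-certificates/4192.pem "/etc/ssl/certs/${h}.0"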
	I1205 07:45:32.106452  299667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:32.110712  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:32.154012  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:32.194946  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:32.235499  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:32.276192  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:32.316778  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
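All six openssl runs above use -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; the exit status, not the printed message, is what matters. The same check scripts cleanly:

    # Exit status drives the decision; no output parsing needed.
    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
        echo 'apiserver.crt expires within 24h; regenerate'
    fi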
	I1205 07:45:32.357969  299667 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:32.358063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:32.358128  299667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:32.393923  299667 cri.go:89] found id: ""
	I1205 07:45:32.393993  299667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:32.401825  299667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:32.401893  299667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:32.401977  299667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:32.409190  299667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:32.409869  299667 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.410186  299667 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-622440" cluster setting kubeconfig missing "newest-cni-622440" context setting]
	I1205 07:45:32.410754  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.412652  299667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:32.420082  299667 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:32.420112  299667 kubeadm.go:602] duration metric: took 18.200733ms to restartPrimaryControlPlane
	I1205 07:45:32.420122  299667 kubeadm.go:403] duration metric: took 62.162615ms to StartCluster
	I1205 07:45:32.420136  299667 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.420193  299667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.421089  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.421340  299667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:32.421617  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:32.421690  299667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:32.421796  299667 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-622440"
	I1205 07:45:32.421816  299667 addons.go:70] Setting default-storageclass=true in profile "newest-cni-622440"
	I1205 07:45:32.421860  299667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-622440"
	I1205 07:45:32.421826  299667 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-622440"
	I1205 07:45:32.421949  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.422169  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.422375  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.421807  299667 addons.go:70] Setting dashboard=true in profile "newest-cni-622440"
	I1205 07:45:32.422859  299667 addons.go:239] Setting addon dashboard=true in "newest-cni-622440"
	W1205 07:45:32.422869  299667 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:32.422895  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.423306  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.425911  299667 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:32.429270  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:32.459552  299667 addons.go:239] Setting addon default-storageclass=true in "newest-cni-622440"
	I1205 07:45:32.459590  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.459994  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.466676  299667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:32.469573  299667 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:32.469693  299667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.469710  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:32.469779  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.479022  299667 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:45:30.602600  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:30.967025  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:31.052948  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.052985  297527 retry.go:31] will retry after 7.944795354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.285879  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:31.386500  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.386531  297527 retry.go:31] will retry after 6.357223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.754709  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:31.845913  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.845950  297527 retry.go:31] will retry after 12.860014736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.103254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:32.484603  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:32.484629  299667 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:32.484694  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.517396  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.529599  299667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.529620  299667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:32.529685  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.549325  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.574838  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.643911  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:32.670090  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.687313  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:32.687343  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:32.721498  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:32.721518  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:32.728026  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.759870  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:32.759892  299667 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:32.773100  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:32.773119  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:32.790813  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:32.790887  299667 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:32.806943  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:32.807008  299667 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:32.827525  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:32.827547  299667 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:32.840144  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:32.840166  299667 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:32.856122  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:32.856196  299667 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:32.869771  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:33.097468  299667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:33.097593  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:33.097728  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097794  299667 retry.go:31] will retry after 241.658936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.097872  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097907  299667 retry.go:31] will retry after 176.603947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.098118  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.098157  299667 retry.go:31] will retry after 229.408257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.275635  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:33.328106  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.333654  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.333699  299667 retry.go:31] will retry after 493.072495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.339842  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:33.420976  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421140  299667 retry.go:31] will retry after 232.443098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.421103  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421275  299667 retry.go:31] will retry after 218.243264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.598377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:33.640183  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:33.654611  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.714507  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.714586  299667 retry.go:31] will retry after 296.021108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.735889  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.735929  299667 retry.go:31] will retry after 647.569018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.827334  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:33.912321  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.912410  299667 retry.go:31] will retry after 511.925432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.011792  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:34.070223  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.070270  299667 retry.go:31] will retry after 1.045041767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.098366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
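In parallel with the applies, the harness keeps polling for the apiserver process itself. The pgrep flags are exact-match (-x), newest match (-n), and full-command-line match (-f); exit status 1 simply means "no such process yet". Note that a process can exist while its port is still closed, which is precisely the window these retries sit in. A sketch of the same check from Go (this wrapper is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// Runs the same existence check as the log line above: pgrep -xnf returns
// the newest process whose full command line matches the pattern. A non-zero
// exit means "not found", which is why the caller polls in a loop.
func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}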
	I1205 07:45:34.384609  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:34.425097  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:34.456662  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.456771  299667 retry.go:31] will retry after 1.012360732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:34.490780  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.490815  299667 retry.go:31] will retry after 673.94662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.598028  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:35.602346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:37.602757  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:37.744028  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.809224  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.809268  297527 retry.go:31] will retry after 8.525278844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.998921  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:39.069453  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.069501  297527 retry.go:31] will retry after 21.498999078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
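The timestamps appear to run backward at this point (07:45:39 above, 07:45:35 below) because two parallel test processes, PIDs 297527 (no-preload-241270) and 299667, write interleaved into the same report. Grouping lines by the PID field of the klog header restores per-process order. A small sketch, assuming the field layout visible in this log ("I1205 07:45:35.097803  299667 retry.go:31] ..."):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// header matches klog-style prefixes like "I1205 07:45:35.097803  299667 retry.go:31]".
var header = regexp.MustCompile(`^[IWEF]\d{4} `)

// Groups interleaved log lines by PID (the third field of the klog header).
// Continuation lines (the stdout:/stderr: bodies) carry no header, so they
// are attached to the most recently seen header's PID.
func main() {
	byPID := map[string][]string{}
	last := ""
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if header.MatchString(strings.TrimSpace(line)) {
			last = strings.Fields(line)[2]
		}
		if last != "" {
			byPID[last] = append(byPID[last], line)
		}
	}
	for pid, lines := range byPID {
		fmt.Printf("== pid %s: %d lines ==\n", pid, len(lines))
	}
}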
	I1205 07:45:35.097803  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.115652  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:35.165241  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:35.189445  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.189528  299667 retry.go:31] will retry after 873.335351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:35.234071  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.234107  299667 retry.go:31] will retry after 1.250813401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.469343  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:35.535355  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.535386  299667 retry.go:31] will retry after 1.457971594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:35.598793  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.063166  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:36.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:36.141912  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.141992  299667 retry.go:31] will retry after 1.289648417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:36.485696  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:36.544841  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.544879  299667 retry.go:31] will retry after 2.662984572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:36.598226  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.993607  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.063691  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.063774  299667 retry.go:31] will retry after 1.151172803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:37.098032  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.431865  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:37.492142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.492177  299667 retry.go:31] will retry after 3.504601193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:37.598357  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.098363  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.215346  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:38.274274  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.274309  299667 retry.go:31] will retry after 1.757329115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:38.597749  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.097719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.208847  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:39.266142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.266182  299667 retry.go:31] will retry after 3.436463849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:39.598395  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.031973  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:40.102833  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:42.602360  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:44.706625  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:40.092374  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.092409  299667 retry.go:31] will retry after 2.182976597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:40.098469  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.598422  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.997583  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:41.059423  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.059455  299667 retry.go:31] will retry after 3.560419221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:41.098613  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.098453  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.276211  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:42.351488  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.351524  299667 retry.go:31] will retry after 9.602308898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:42.598167  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.703420  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:42.760290  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.760322  299667 retry.go:31] will retry after 5.381602643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:43.097810  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:43.597706  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.098335  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.597780  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.620405  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:44.677458  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.677489  299667 retry.go:31] will retry after 4.279612118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:45:44.764830  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.764865  297527 retry.go:31] will retry after 17.369945393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1205 07:45:45.102956  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:46.334817  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:46.418483  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:46.418521  297527 retry.go:31] will retry after 23.303020683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:47.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:49.602799  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:45.098273  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:45.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.597868  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.097740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.597768  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.097748  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.142199  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:48.202751  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.202784  299667 retry.go:31] will retry after 9.130347643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.958075  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:49.020580  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.020664  299667 retry.go:31] will retry after 5.816091686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:49.597778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:52.102357  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:54.603289  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:50.097903  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.598277  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.098323  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.598320  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.954438  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:52.018482  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.018522  299667 retry.go:31] will retry after 11.887626777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.098608  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:52.598374  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.098377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.098330  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.597906  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
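The interleaved ssh_runner.go lines show the second profile (pid 299667) polling for a running kube-apiserver process roughly every 500 ms with `sudo pgrep -xnf kube-apiserver.*minikube.*`; the process never appears, so the poll runs for the entire window. A rough local approximation of that poll follows, using exec rather than the SSH transport the real ssh_runner uses.

	// pgrep_poll_sketch.go — a local sketch of the apiserver poll seen in
	// the log; minikube runs the same command over SSH inside the node.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pattern := "kube-apiserver.*minikube.*"
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			// pgrep -x: exact match, -n: newest process, -f: match the
			// full command line. pgrep exits nonzero when nothing matches.
			out, err := exec.Command("pgrep", "-xnf", pattern).Output()
			if err == nil {
				fmt.Printf("kube-apiserver is up (pid %s)", out)
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}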
	I1205 07:45:54.837992  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:54.928421  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:54.928451  299667 retry.go:31] will retry after 21.232814528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:57.103152  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:59.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:55.097998  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:55.598566  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.098233  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.598487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.333368  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:57.391373  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.391409  299667 retry.go:31] will retry after 6.534046571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.598447  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.098487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.597673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.098584  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.597752  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.568740  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:00.647111  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:00.647143  297527 retry.go:31] will retry after 19.124891194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:01.602386  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:02.135738  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:02.196508  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:02.196541  297527 retry.go:31] will retry after 23.234297555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:00.111473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.597738  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.097860  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.597786  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.598349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.097778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.906517  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:03.926085  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:03.977088  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:03.977126  299667 retry.go:31] will retry after 8.615984736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.014857  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.014953  299667 retry.go:31] will retry after 11.096851447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.098074  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.598727  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:06.103226  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:08.602282  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:09.722604  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:05.098302  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.598378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.098313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.098365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.597739  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.597740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.098581  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.598396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:09.788810  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:09.788894  297527 retry.go:31] will retry after 37.030083188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:10.602342  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:13.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:10.098145  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.097819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.598431  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.098421  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.593706  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:12.598498  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:12.687257  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:12.687290  299667 retry.go:31] will retry after 19.919210015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:13.098633  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:13.598345  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.097716  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.598398  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:15.602302  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:17.603239  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:15.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.112618  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:15.170666  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.170700  299667 retry.go:31] will retry after 26.586504873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.598228  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.161584  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:16.224162  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.224193  299667 retry.go:31] will retry after 29.423350117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.597722  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.097721  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.597743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.098656  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.598271  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.098404  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.598719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.772903  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:19.832639  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:19.832668  297527 retry.go:31] will retry after 32.800355392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:20.103191  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:22.602639  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:24.603138  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
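
The node_ready.go warnings interleaved here come from a second test process (pid 297527) polling the "Ready" condition of node no-preload-241270. A hedged sketch of such a poll follows, using a raw HTTPS GET against the endpoint named in the log; the real code uses an authenticated client-go clientset, so the insecure client, the bounded retry loop, and the 2-second interval are all assumptions.

// Sketch of a node-readiness poll: GET /api/v1/nodes/<name> and inspect the
// "Ready" condition. While the apiserver is down this fails with the same
// "connection refused" seen in the warnings above. A live apiserver would
// additionally require credentials, which this sketch omits.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, fmt.Errorf("no Ready condition on node")
}

func main() {
	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270" // address taken from the log
	for i := 0; i < 10; i++ {
		ok, err := nodeReady(url)
		if err != nil {
			fmt.Println("will retry:", err) // matches the node_ready.go warnings
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("node Ready:", ok)
		return
	}
	fmt.Println("gave up waiting for node readiness")
}
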
	I1205 07:46:20.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.597725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.097770  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.598319  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.097718  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.098368  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.598400  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
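
The half-second cadence of `sudo pgrep -xnf kube-apiserver.*minikube.*` above is a process-level wait for the apiserver to reappear: pgrep exits non-zero while no matching process exists. A small Go sketch of that wait loop; the overall timeout is an assumption.

// Sketch of the pgrep poll seen throughout this log.
// -x matches the pattern against the whole string, -n picks the newest
// process, -f matches against the full command line.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
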
	I1205 07:46:25.431569  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:25.488990  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:25.489023  297527 retry.go:31] will retry after 28.819883279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:27.102333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:29.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:25.098708  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.597766  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.098393  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.598238  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.098573  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.598365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.598524  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.097726  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.598366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:31.103394  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:33.602924  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:30.098021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.598337  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.098378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.097725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.597622  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:32.597702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:32.607176  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:32.654366  299667 cri.go:89] found id: ""
	I1205 07:46:32.654387  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.654395  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:32.654402  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:32.654460  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:46:32.707430  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707464  299667 retry.go:31] will retry after 35.686554771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707503  299667 cri.go:89] found id: ""
	I1205 07:46:32.707512  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.707519  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:32.707525  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:32.707583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:32.732319  299667 cri.go:89] found id: ""
	I1205 07:46:32.732341  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.732350  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:32.732356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:32.732414  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:32.756204  299667 cri.go:89] found id: ""
	I1205 07:46:32.756226  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.756235  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:32.756241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:32.756313  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:32.785401  299667 cri.go:89] found id: ""
	I1205 07:46:32.785423  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.785431  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:32.785437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:32.785493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:32.811348  299667 cri.go:89] found id: ""
	I1205 07:46:32.811373  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.811381  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:32.811388  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:32.811461  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:32.835578  299667 cri.go:89] found id: ""
	I1205 07:46:32.835603  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.835612  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:32.835618  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:32.835679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:32.861749  299667 cri.go:89] found id: ""
	I1205 07:46:32.861773  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.861781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
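
Between apply retries, the test sweeps the CRI runtime for every control-plane component and finds no containers at all, which is why the subsequent log gathering falls back to journald and dmesg. A sketch of that sweep, using the same crictl flags shown in the log lines above; the component list is copied from the log.

// Probe the container runtime for each expected component; an empty ID list
// corresponds to the "0 containers" / "No container was found" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
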
	I1205 07:46:32.861790  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:32.861801  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:32.937533  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:32.937555  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:32.937568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:32.962127  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:32.962161  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:32.989223  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:32.989256  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:33.046092  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:33.046128  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
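
The "Gathering logs for ..." steps collect fallback diagnostics (containerd and kubelet journals, dmesg, raw container status) once no Kubernetes containers are found. A hedged local sketch of those collection commands, copied verbatim from the log lines above; minikube actually runs them over SSH inside the node, not locally.

// Best-effort diagnostics collection: run each command via bash -c and keep
// whatever output comes back, even on error, as the log gathering above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	diags := map[string]string{
		"containerd":       "sudo journalctl -u containerd -n 400",
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range diags {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
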
	W1205 07:46:36.102426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:38.602828  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:35.559882  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:35.570602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:35.570679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:35.597322  299667 cri.go:89] found id: ""
	I1205 07:46:35.597348  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.597358  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:35.597364  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:35.597420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:35.631556  299667 cri.go:89] found id: ""
	I1205 07:46:35.631585  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.631594  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:35.631605  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:35.631670  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:35.666766  299667 cri.go:89] found id: ""
	I1205 07:46:35.666790  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.666808  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:35.666851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:35.666928  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:35.696469  299667 cri.go:89] found id: ""
	I1205 07:46:35.696494  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.696503  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:35.696510  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:35.696570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:35.721564  299667 cri.go:89] found id: ""
	I1205 07:46:35.721587  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.721613  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:35.721620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:35.721679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:35.750450  299667 cri.go:89] found id: ""
	I1205 07:46:35.750474  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.750483  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:35.750490  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:35.750577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:35.779075  299667 cri.go:89] found id: ""
	I1205 07:46:35.779097  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.779105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:35.779111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:35.779171  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:35.804778  299667 cri.go:89] found id: ""
	I1205 07:46:35.804849  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.804870  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:35.804891  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.804928  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.818664  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.818691  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.896985  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:35.897010  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:35.897023  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:35.922964  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.922997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:35.950985  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:35.951012  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.510773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:38.521214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:38.521283  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:38.547037  299667 cri.go:89] found id: ""
	I1205 07:46:38.547061  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.547069  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:38.547088  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:38.547152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:38.571870  299667 cri.go:89] found id: ""
	I1205 07:46:38.571894  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.571903  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:38.571909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:38.571967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:38.597667  299667 cri.go:89] found id: ""
	I1205 07:46:38.597693  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.597701  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.597707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:38.597781  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:38.634302  299667 cri.go:89] found id: ""
	I1205 07:46:38.634328  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.634336  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:38.634343  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:38.634411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:38.662787  299667 cri.go:89] found id: ""
	I1205 07:46:38.662813  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.662822  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.662829  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:38.662886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:38.688000  299667 cri.go:89] found id: ""
	I1205 07:46:38.688026  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.688034  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:38.688040  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:38.688108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:38.712589  299667 cri.go:89] found id: ""
	I1205 07:46:38.712611  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.712619  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.712631  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:38.712688  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:38.736469  299667 cri.go:89] found id: ""
	I1205 07:46:38.736490  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.736499  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:38.736507  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.736521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.763556  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.763586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.818344  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.818379  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.832020  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.832054  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.931143  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:38.931164  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:38.931178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:40.603153  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:43.102740  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:41.457376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.468655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:41.468729  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:41.496317  299667 cri.go:89] found id: ""
	I1205 07:46:41.496391  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.496415  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:41.496434  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:41.496520  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:41.522205  299667 cri.go:89] found id: ""
	I1205 07:46:41.522230  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.522238  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:41.522244  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:41.522304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:41.547643  299667 cri.go:89] found id: ""
	I1205 07:46:41.547668  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.547677  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.547684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:41.547743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:41.576000  299667 cri.go:89] found id: ""
	I1205 07:46:41.576024  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.576032  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:41.576039  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:41.576093  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:41.610347  299667 cri.go:89] found id: ""
	I1205 07:46:41.610373  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.610393  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.610399  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:41.610455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:41.641947  299667 cri.go:89] found id: ""
	I1205 07:46:41.641974  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.641983  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:41.641990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:41.642049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:41.680331  299667 cri.go:89] found id: ""
	I1205 07:46:41.680355  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.680363  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.680370  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:41.680426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:41.707279  299667 cri.go:89] found id: ""
	I1205 07:46:41.707301  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.707310  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:41.707319  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.707331  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.720629  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.720654  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 07:46:41.757919  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:41.789558  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:41.789582  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:41.789596  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:41.829441  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.829475  299667 retry.go:31] will retry after 23.380573162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.840285  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.840316  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.875962  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.875990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.439978  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.450947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:44.451025  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:44.476311  299667 cri.go:89] found id: ""
	I1205 07:46:44.476335  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.476344  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:44.476350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:44.476420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:44.501030  299667 cri.go:89] found id: ""
	I1205 07:46:44.501064  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.501073  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:44.501078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:44.501138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:44.525674  299667 cri.go:89] found id: ""
	I1205 07:46:44.525697  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.525705  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.525711  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:44.525769  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:44.554878  299667 cri.go:89] found id: ""
	I1205 07:46:44.554903  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.554911  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:44.554918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:44.554991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:44.579773  299667 cri.go:89] found id: ""
	I1205 07:46:44.579796  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.579805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.579811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:44.579867  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:44.611991  299667 cri.go:89] found id: ""
	I1205 07:46:44.612017  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.612042  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:44.612049  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:44.612108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:44.646395  299667 cri.go:89] found id: ""
	I1205 07:46:44.646418  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.646427  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.646433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:44.646499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:44.674148  299667 cri.go:89] found id: ""
	I1205 07:46:44.674170  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.674178  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:44.674187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.674199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.734427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.734469  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.748531  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:44.748561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:44.815565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:44.815586  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:44.815601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:44.841456  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:44.841492  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:45.103537  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:46.819177  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:46.909187  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:46.909286  297527 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:47.602297  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:49.602426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:45.648666  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:45.706769  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:45.706803  299667 retry.go:31] will retry after 32.901994647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:47.381509  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.392949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:47.393065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:47.424033  299667 cri.go:89] found id: ""
	I1205 07:46:47.424057  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.424066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:47.424072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:47.424140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:47.451239  299667 cri.go:89] found id: ""
	I1205 07:46:47.451265  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.451275  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:47.451282  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:47.451342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:47.475229  299667 cri.go:89] found id: ""
	I1205 07:46:47.475250  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.475259  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.475265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:47.475322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:47.500010  299667 cri.go:89] found id: ""
	I1205 07:46:47.500036  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.500045  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:47.500051  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:47.500110  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:47.525665  299667 cri.go:89] found id: ""
	I1205 07:46:47.525691  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.525700  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:47.525707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:47.525767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:47.550876  299667 cri.go:89] found id: ""
	I1205 07:46:47.550902  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.550911  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:47.550917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:47.550978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:47.574838  299667 cri.go:89] found id: ""
	I1205 07:46:47.574904  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.574926  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:47.574940  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:47.575018  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:47.606672  299667 cri.go:89] found id: ""
	I1205 07:46:47.606698  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.606707  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:47.606716  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:47.606728  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.644360  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:47.644388  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:47.706982  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:47.707019  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:47.720731  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:47.720759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:47.782357  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:47.782378  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:47.782393  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:51.603232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:52.633653  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:52.692000  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:52.692106  297527 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:54.102683  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:54.310076  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:54.372261  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:54.372370  297527 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:46:54.375412  297527 out.go:179] * Enabled addons: 
	I1205 07:46:54.378282  297527 addons.go:530] duration metric: took 1m42.739564939s for enable addons: enabled=[]
	I1205 07:46:50.307630  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:50.318086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:50.318159  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:50.342816  299667 cri.go:89] found id: ""
	I1205 07:46:50.342838  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.342847  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:50.342853  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:50.342921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:50.371375  299667 cri.go:89] found id: ""
	I1205 07:46:50.371440  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.371462  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:50.371478  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:50.371566  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:50.401098  299667 cri.go:89] found id: ""
	I1205 07:46:50.401206  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.401224  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:50.401245  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:50.401310  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:50.432101  299667 cri.go:89] found id: ""
	I1205 07:46:50.432134  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.432143  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:50.432149  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:50.432262  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:50.457371  299667 cri.go:89] found id: ""
	I1205 07:46:50.457396  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.457405  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:50.457413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:50.457469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:50.486796  299667 cri.go:89] found id: ""
	I1205 07:46:50.486821  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.486830  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:50.486836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:50.486945  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:50.515505  299667 cri.go:89] found id: ""
	I1205 07:46:50.515529  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.515537  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:50.515544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:50.515606  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:50.543462  299667 cri.go:89] found id: ""
	I1205 07:46:50.543486  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.543495  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:50.543503  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:50.543561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:50.600091  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:50.600276  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:50.619872  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:50.619944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:50.690141  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:50.690160  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:50.690173  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.715362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:50.715398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:53.244467  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:53.256174  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:53.256240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:53.279782  299667 cri.go:89] found id: ""
	I1205 07:46:53.279803  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.279810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:53.279817  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:53.279878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:53.303793  299667 cri.go:89] found id: ""
	I1205 07:46:53.303813  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.303821  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:53.303827  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:53.303884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:53.332886  299667 cri.go:89] found id: ""
	I1205 07:46:53.332908  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.332916  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:53.332922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:53.332981  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:53.359130  299667 cri.go:89] found id: ""
	I1205 07:46:53.359153  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.359161  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:53.359168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:53.359229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:53.384922  299667 cri.go:89] found id: ""
	I1205 07:46:53.384947  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.384966  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:53.384972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:53.385033  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:53.409882  299667 cri.go:89] found id: ""
	I1205 07:46:53.409903  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.409912  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:53.409918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:53.409982  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:53.435229  299667 cri.go:89] found id: ""
	I1205 07:46:53.435254  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.435263  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:53.435269  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:53.435326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:53.460378  299667 cri.go:89] found id: ""
	I1205 07:46:53.460402  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.460411  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:53.460419  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:53.460430  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:53.515653  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:53.515686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:53.529252  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:53.529277  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:53.590407  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:53.590427  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:53.590439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:53.615638  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:53.615670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:56.102997  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:58.602448  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:56.149491  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:56.160491  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:56.160560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:56.186032  299667 cri.go:89] found id: ""
	I1205 07:46:56.186055  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.186063  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:56.186069  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:56.186127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:56.210655  299667 cri.go:89] found id: ""
	I1205 07:46:56.210683  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.210691  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:56.210698  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:56.210760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:56.236968  299667 cri.go:89] found id: ""
	I1205 07:46:56.237039  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.237060  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:56.237078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:56.237197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:56.261470  299667 cri.go:89] found id: ""
	I1205 07:46:56.261543  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.261559  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:56.261567  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:56.261626  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:56.287544  299667 cri.go:89] found id: ""
	I1205 07:46:56.287569  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.287578  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:56.287586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:56.287664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:56.313083  299667 cri.go:89] found id: ""
	I1205 07:46:56.313154  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.313200  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:56.313222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:56.313290  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:56.338841  299667 cri.go:89] found id: ""
	I1205 07:46:56.338865  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.338879  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:56.338886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:56.338971  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:56.364821  299667 cri.go:89] found id: ""
	I1205 07:46:56.364883  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.364906  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:56.364927  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:56.364953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:56.421380  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:56.421412  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:56.434797  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:56.434825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:56.500557  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:56.500579  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:56.500592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:56.525423  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:56.525453  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.059925  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:59.070350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:59.070417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:59.106211  299667 cri.go:89] found id: ""
	I1205 07:46:59.106234  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.106242  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:59.106250  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:59.106308  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:59.134075  299667 cri.go:89] found id: ""
	I1205 07:46:59.134101  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.134110  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:59.134116  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:59.134173  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:59.163091  299667 cri.go:89] found id: ""
	I1205 07:46:59.163119  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.163128  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:59.163134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:59.163195  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:59.189283  299667 cri.go:89] found id: ""
	I1205 07:46:59.189308  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.189316  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:59.189323  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:59.189384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:59.214391  299667 cri.go:89] found id: ""
	I1205 07:46:59.214416  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.214433  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:59.214439  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:59.214498  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:59.246223  299667 cri.go:89] found id: ""
	I1205 07:46:59.246246  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.246255  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:59.246262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:59.246321  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:59.274955  299667 cri.go:89] found id: ""
	I1205 07:46:59.274991  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.274999  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:59.275006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:59.275074  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:59.302932  299667 cri.go:89] found id: ""
	I1205 07:46:59.302956  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.302965  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:59.302984  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:59.302997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:59.362548  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:59.362571  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:59.362583  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:59.387053  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:59.387085  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.413739  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:59.413767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:59.469532  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:59.469569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:00.602658  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:03.102385  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:01.983455  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.994190  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:01.994316  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:02.023883  299667 cri.go:89] found id: ""
	I1205 07:47:02.023913  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.023922  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:02.023929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:02.023992  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:02.050293  299667 cri.go:89] found id: ""
	I1205 07:47:02.050367  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.050383  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:02.050390  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:02.050458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:02.076131  299667 cri.go:89] found id: ""
	I1205 07:47:02.076157  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.076166  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:02.076172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:02.076235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:02.115590  299667 cri.go:89] found id: ""
	I1205 07:47:02.115623  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.115632  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:02.115638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:02.115733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:02.155255  299667 cri.go:89] found id: ""
	I1205 07:47:02.155281  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.155290  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:02.155297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:02.155355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:02.184142  299667 cri.go:89] found id: ""
	I1205 07:47:02.184169  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.184178  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:02.184185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:02.184244  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:02.208969  299667 cri.go:89] found id: ""
	I1205 07:47:02.208997  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.209006  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:02.209036  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:02.209126  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:02.233523  299667 cri.go:89] found id: ""
	I1205 07:47:02.233556  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.233565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:02.233597  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:02.233609  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:02.289818  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:02.289852  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:02.303686  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:02.303756  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:02.370663  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:02.370711  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:02.370723  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:02.395466  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:02.395508  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
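
The cycle above is minikube's log-gathering pass: it probes the CRI for each expected control-plane container by name, finds none, and falls back to the kubelet, dmesg, describe-nodes, containerd, and container-status sources. A minimal sketch of that probe loop, assuming a shell inside the node (for example via minikube ssh) with crictl on PATH; the component list mirrors the names queried in the log, and this is an illustrative equivalent, not minikube's actual implementation:

    # Probe each expected control-plane container the way the log above does.
    # Empty output from crictl means "no container found matching <name>".
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
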
	I1205 07:47:04.925546  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.937771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:04.937866  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:04.967009  299667 cri.go:89] found id: ""
	I1205 07:47:04.967031  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.967039  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:04.967046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:04.967103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:04.998327  299667 cri.go:89] found id: ""
	I1205 07:47:04.998351  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.998360  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:04.998365  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:04.998426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:05.026478  299667 cri.go:89] found id: ""
	I1205 07:47:05.026505  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.026513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:05.026521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:05.026583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:05.051556  299667 cri.go:89] found id: ""
	I1205 07:47:05.051580  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.051588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:05.051595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:05.051658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:05.078546  299667 cri.go:89] found id: ""
	I1205 07:47:05.078570  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.078579  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:05.078585  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:05.078649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1205 07:47:05.102744  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:07.602359  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:09.603452  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:05.107928  299667 cri.go:89] found id: ""
	I1205 07:47:05.107955  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.107964  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:05.107971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:05.108035  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:05.134695  299667 cri.go:89] found id: ""
	I1205 07:47:05.134718  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.134727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:05.134733  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:05.134792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:05.160991  299667 cri.go:89] found id: ""
	I1205 07:47:05.161017  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.161025  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:05.161035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:05.161048  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:05.211053  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:47:05.219354  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:05.219426  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:05.274067  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:05.274165  299667 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
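
The storageclass apply fails in kubectl's client-side validation, which fetches the apiserver's /openapi/v2 document; with nothing listening on localhost:8443 the command dies before any object is even sent. The error text itself names the escape hatch. As a diagnostic step only, skipping validation does not make an unreachable apiserver reachable:

    # Same apply as in the log, with validation skipped as the error suggests.
    # This would still fail here: persisting the object needs a live
    # apiserver on localhost:8443.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml
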
	I1205 07:47:05.274831  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.274851  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.336443  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.336473  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:05.336486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:05.361343  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.361374  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:07.887800  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.899185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:07.899259  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:07.927401  299667 cri.go:89] found id: ""
	I1205 07:47:07.927423  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.927431  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:07.927437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:07.927511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:07.958986  299667 cri.go:89] found id: ""
	I1205 07:47:07.959008  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.959017  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:07.959023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:07.959081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:07.986953  299667 cri.go:89] found id: ""
	I1205 07:47:07.986974  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.986983  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:07.986989  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:07.987052  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:08.013548  299667 cri.go:89] found id: ""
	I1205 07:47:08.013573  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.013581  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:08.013590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:08.013654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:08.039626  299667 cri.go:89] found id: ""
	I1205 07:47:08.039650  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.039658  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.039664  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:08.039724  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:08.064448  299667 cri.go:89] found id: ""
	I1205 07:47:08.064472  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.064482  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:08.064489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:08.064548  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:08.089144  299667 cri.go:89] found id: ""
	I1205 07:47:08.089234  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.089250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.089257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:08.089325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:08.124837  299667 cri.go:89] found id: ""
	I1205 07:47:08.124863  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.124890  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:08.124900  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.124917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.155028  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.155055  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.215310  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.215346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.229549  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.229577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.292266  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.292296  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:08.292309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:08.394608  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:47:08.457975  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:08.458074  299667 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
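
Every kubectl call in this stretch fails with connection refused on localhost:8443, which is consistent with the empty crictl listings: no kube-apiserver container ever started, so there is no listener on the secure port. Two quick checks from inside the node confirm the symptom; a diagnostic sketch, not part of the test run:

    # Is an apiserver process running at all?
    sudo pgrep -af kube-apiserver || echo "kube-apiserver is not running"
    # Is anything answering on the secure port? /livez is the apiserver's
    # liveness endpoint; with no listener this fails immediately.
    curl -k --max-time 5 https://localhost:8443/livez || echo "nothing listening on :8443"
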
	W1205 07:47:12.102433  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:14.102787  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:10.816831  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:10.827471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:10.827537  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:10.856590  299667 cri.go:89] found id: ""
	I1205 07:47:10.856612  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.856621  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:10.856626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:10.856687  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:10.887186  299667 cri.go:89] found id: ""
	I1205 07:47:10.887207  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.887215  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:10.887221  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:10.887279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:10.914460  299667 cri.go:89] found id: ""
	I1205 07:47:10.914482  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.914490  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:10.914497  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:10.914554  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:10.943070  299667 cri.go:89] found id: ""
	I1205 07:47:10.943095  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.943103  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:10.943109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:10.943167  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:10.967007  299667 cri.go:89] found id: ""
	I1205 07:47:10.967034  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.967043  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:10.967050  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:10.967142  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:10.990367  299667 cri.go:89] found id: ""
	I1205 07:47:10.990394  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.990402  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:10.990408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:10.990465  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:11.021515  299667 cri.go:89] found id: ""
	I1205 07:47:11.021538  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.021547  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.021553  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:11.021616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:11.046137  299667 cri.go:89] found id: ""
	I1205 07:47:11.046159  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.046168  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:11.046176  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:11.046190  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:11.071756  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.071787  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:11.101757  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:11.101784  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:11.175924  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:11.175962  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:11.190392  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.190424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.252655  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:13.753819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:13.764287  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:13.764373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:13.790393  299667 cri.go:89] found id: ""
	I1205 07:47:13.790418  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.790426  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:13.790433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:13.790496  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:13.814911  299667 cri.go:89] found id: ""
	I1205 07:47:13.814935  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.814944  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:13.814951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:13.815007  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:13.839756  299667 cri.go:89] found id: ""
	I1205 07:47:13.839779  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.839787  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:13.839794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:13.839852  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:13.870908  299667 cri.go:89] found id: ""
	I1205 07:47:13.870933  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.870943  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:13.870949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:13.871010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:13.902182  299667 cri.go:89] found id: ""
	I1205 07:47:13.902208  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.902216  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:13.902223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:13.902281  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:13.928077  299667 cri.go:89] found id: ""
	I1205 07:47:13.928102  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.928111  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:13.928117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:13.928174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:13.952673  299667 cri.go:89] found id: ""
	I1205 07:47:13.952706  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.952715  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:13.952721  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:13.952786  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:13.982104  299667 cri.go:89] found id: ""
	I1205 07:47:13.982137  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.982147  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:13.982156  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:13.982168  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:14.047894  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:14.047925  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:14.061830  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:14.061861  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:14.145569  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:14.145587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:14.145601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:14.173369  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:14.173406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:16.701890  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:16.712471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:16.712541  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:16.737364  299667 cri.go:89] found id: ""
	I1205 07:47:16.737386  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.737394  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:16.737400  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:16.737458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:16.761826  299667 cri.go:89] found id: ""
	I1205 07:47:16.761849  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.761858  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:16.761864  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:16.761921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:16.787321  299667 cri.go:89] found id: ""
	I1205 07:47:16.787343  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.787352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:16.787359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:16.787419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:16.812059  299667 cri.go:89] found id: ""
	I1205 07:47:16.812080  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.812087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:16.812094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:16.812152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:16.835710  299667 cri.go:89] found id: ""
	I1205 07:47:16.835731  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.835739  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:16.835745  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:16.835804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:16.866817  299667 cri.go:89] found id: ""
	I1205 07:47:16.866839  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.866848  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:16.866854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:16.866915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:16.892855  299667 cri.go:89] found id: ""
	I1205 07:47:16.892877  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.892885  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:16.892891  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:16.892948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:16.921328  299667 cri.go:89] found id: ""
	I1205 07:47:16.921348  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.921356  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:16.921365  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.921378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.975810  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.975843  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:16.989559  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.989589  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:17.052011  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:17.052031  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:17.052044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:17.076823  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:17.076853  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:18.609402  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:47:18.686960  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:18.687059  299667 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:18.690290  299667 out.go:179] * Enabled addons: 
	W1205 07:47:16.602616  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:19.102330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:18.693172  299667 addons.go:530] duration metric: took 1m46.271465904s for enable addons: enabled=[]
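
With every apply retry failing, addon enablement finishes after roughly 1m46s with an empty set (enabled=[]). Since the apiserver runs as a kubelet-managed static pod, the kubelet journal minikube tails above is the natural next place to look for why it never came up; a diagnostic sketch from inside the node:

    # Same journal minikube gathers above, filtered for apiserver start errors.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'kube-apiserver|error' | tail -n 40
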
	I1205 07:47:19.612423  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:19.623124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:19.623194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:19.651237  299667 cri.go:89] found id: ""
	I1205 07:47:19.651260  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.651268  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:19.651276  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:19.651338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:19.679760  299667 cri.go:89] found id: ""
	I1205 07:47:19.679781  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.679790  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:19.679795  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:19.679854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:19.703620  299667 cri.go:89] found id: ""
	I1205 07:47:19.703640  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.703652  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:19.703658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:19.703731  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:19.727543  299667 cri.go:89] found id: ""
	I1205 07:47:19.727607  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.727629  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:19.727645  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:19.727736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:19.751580  299667 cri.go:89] found id: ""
	I1205 07:47:19.751606  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.751614  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:19.751620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:19.751678  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:19.778033  299667 cri.go:89] found id: ""
	I1205 07:47:19.778058  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.778066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:19.778074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:19.778130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:19.805321  299667 cri.go:89] found id: ""
	I1205 07:47:19.805346  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.805354  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:19.805360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:19.805419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:19.828911  299667 cri.go:89] found id: ""
	I1205 07:47:19.828932  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.828940  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:19.828949  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:19.828961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:19.842046  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:19.842072  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:19.924477  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:19.924542  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:19.924568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:19.949241  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:19.949279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:19.977260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:19.977287  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:47:21.102389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:23.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:22.534572  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:22.545193  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:22.545272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:22.570057  299667 cri.go:89] found id: ""
	I1205 07:47:22.570083  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.570092  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:22.570098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:22.570163  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:22.595296  299667 cri.go:89] found id: ""
	I1205 07:47:22.595321  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.595330  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:22.595337  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:22.595421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:22.620283  299667 cri.go:89] found id: ""
	I1205 07:47:22.620307  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.620315  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:22.620322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:22.620399  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:22.644353  299667 cri.go:89] found id: ""
	I1205 07:47:22.644379  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.644389  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:22.644395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:22.644474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:22.674856  299667 cri.go:89] found id: ""
	I1205 07:47:22.674885  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.674894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:22.674900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:22.674980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:22.699975  299667 cri.go:89] found id: ""
	I1205 07:47:22.700002  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.700011  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:22.700018  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:22.700089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:22.725706  299667 cri.go:89] found id: ""
	I1205 07:47:22.725734  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.725743  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:22.725753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:22.725822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:22.750409  299667 cri.go:89] found id: ""
	I1205 07:47:22.750430  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.750439  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:22.750459  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:22.750471  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:22.775719  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:22.775754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:22.806148  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:22.806175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.863750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:22.863786  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:22.878145  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:22.878174  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:22.945284  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:47:25.602789  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:28.102396  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:25.446099  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:25.457267  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:25.457345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:25.484246  299667 cri.go:89] found id: ""
	I1205 07:47:25.484273  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.484282  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:25.484289  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:25.484346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:25.513783  299667 cri.go:89] found id: ""
	I1205 07:47:25.513806  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.513815  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:25.513821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:25.513895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:25.542603  299667 cri.go:89] found id: ""
	I1205 07:47:25.542627  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.542636  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:25.542642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:25.542768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:25.566393  299667 cri.go:89] found id: ""
	I1205 07:47:25.566417  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.566427  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:25.566433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:25.566510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:25.591113  299667 cri.go:89] found id: ""
	I1205 07:47:25.591148  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.591157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:25.591164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:25.591237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:25.619895  299667 cri.go:89] found id: ""
	I1205 07:47:25.619919  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.619928  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:25.619935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:25.619991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:25.645287  299667 cri.go:89] found id: ""
	I1205 07:47:25.645311  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.645319  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:25.645326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:25.645386  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:25.670944  299667 cri.go:89] found id: ""
	I1205 07:47:25.670967  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.670975  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:25.671025  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:25.671043  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:25.728687  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:25.728721  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:25.743347  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:25.743373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:25.808046  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:25.808069  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:25.808082  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:25.833265  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:25.833298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:28.366360  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:28.378460  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:28.378539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:28.413651  299667 cri.go:89] found id: ""
	I1205 07:47:28.413678  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.413687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:28.413694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:28.413755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:28.439196  299667 cri.go:89] found id: ""
	I1205 07:47:28.439223  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.439232  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:28.439238  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:28.439323  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:28.463516  299667 cri.go:89] found id: ""
	I1205 07:47:28.463587  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.463610  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:28.463628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:28.463709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:28.489425  299667 cri.go:89] found id: ""
	I1205 07:47:28.489450  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.489459  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:28.489467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:28.489560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:28.516772  299667 cri.go:89] found id: ""
	I1205 07:47:28.516797  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.516806  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:28.516812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:28.516872  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:28.543466  299667 cri.go:89] found id: ""
	I1205 07:47:28.543490  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.543498  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:28.543507  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:28.543564  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:28.568431  299667 cri.go:89] found id: ""
	I1205 07:47:28.568455  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.568463  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:28.568469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:28.568528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:28.593549  299667 cri.go:89] found id: ""
	I1205 07:47:28.593573  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.593581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:28.593590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:28.593601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:28.652330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:28.652364  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:28.665857  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:28.665882  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:28.733864  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:28.733886  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:28.733898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:28.758935  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:28.758971  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:30.102577  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:32.602389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:34.602704  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:31.286625  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:31.297007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:31.297075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:31.324486  299667 cri.go:89] found id: ""
	I1205 07:47:31.324508  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.324517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:31.324523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:31.324585  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:31.367211  299667 cri.go:89] found id: ""
	I1205 07:47:31.367234  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.367242  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:31.367249  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:31.367336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:31.398063  299667 cri.go:89] found id: ""
	I1205 07:47:31.398124  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.398148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:31.398166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:31.398239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:31.430255  299667 cri.go:89] found id: ""
	I1205 07:47:31.430280  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.430288  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:31.430303  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:31.430362  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:31.455188  299667 cri.go:89] found id: ""
	I1205 07:47:31.455213  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.455222  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:31.455228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:31.455304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:31.483709  299667 cri.go:89] found id: ""
	I1205 07:47:31.483734  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.483743  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:31.483754  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:31.483841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:31.511054  299667 cri.go:89] found id: ""
	I1205 07:47:31.511081  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.511090  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:31.511096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:31.511154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:31.536168  299667 cri.go:89] found id: ""
	I1205 07:47:31.536193  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.536202  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:31.536211  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:31.536222  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:31.592031  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:31.592066  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:31.606480  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:31.606506  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:31.673271  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:31.673294  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:31.673309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:31.699030  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:31.699063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:34.230473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:34.241086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:34.241182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:34.266354  299667 cri.go:89] found id: ""
	I1205 07:47:34.266377  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.266386  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:34.266393  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:34.266455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:34.295281  299667 cri.go:89] found id: ""
	I1205 07:47:34.295304  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.295313  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:34.295322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:34.295381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:34.320096  299667 cri.go:89] found id: ""
	I1205 07:47:34.320119  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.320127  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:34.320134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:34.320193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:34.351699  299667 cri.go:89] found id: ""
	I1205 07:47:34.351769  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.351778  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:34.351785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:34.351890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:34.384621  299667 cri.go:89] found id: ""
	I1205 07:47:34.384643  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.384651  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:34.384658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:34.384716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:34.416183  299667 cri.go:89] found id: ""
	I1205 07:47:34.416209  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.416217  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:34.416225  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:34.416303  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:34.442818  299667 cri.go:89] found id: ""
	I1205 07:47:34.442843  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.442852  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:34.442859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:34.442926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:34.467574  299667 cri.go:89] found id: ""
	I1205 07:47:34.467600  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.467608  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:34.467618  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:34.467630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:34.525566  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:34.525599  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:34.538971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:34.539003  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:34.603104  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:34.603123  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:34.603135  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:34.627990  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:34.628024  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:37.102277  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:39.102399  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:37.156741  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:37.168917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:37.168986  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:37.194896  299667 cri.go:89] found id: ""
	I1205 07:47:37.194920  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.194929  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:37.194935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:37.194996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:37.220279  299667 cri.go:89] found id: ""
	I1205 07:47:37.220316  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.220324  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:37.220331  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:37.220402  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:37.244728  299667 cri.go:89] found id: ""
	I1205 07:47:37.244759  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.244768  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:37.244774  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:37.244838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:37.269770  299667 cri.go:89] found id: ""
	I1205 07:47:37.269794  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.269802  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:37.269809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:37.269865  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:37.296343  299667 cri.go:89] found id: ""
	I1205 07:47:37.296367  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.296376  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:37.296382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:37.296444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:37.321553  299667 cri.go:89] found id: ""
	I1205 07:47:37.321576  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.321585  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:37.321592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:37.321651  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:37.356802  299667 cri.go:89] found id: ""
	I1205 07:47:37.356824  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.356834  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:37.356841  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:37.356901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:37.384475  299667 cri.go:89] found id: ""
	I1205 07:47:37.384497  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.384505  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:37.384513  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:37.384524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:37.451184  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:37.451220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:37.465508  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:37.465535  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:37.531461  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:37.531483  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:37.531495  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:37.556492  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:37.556531  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.084953  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:47:41.103193  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:43.602434  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:40.099166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:40.099240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:40.129037  299667 cri.go:89] found id: ""
	I1205 07:47:40.129058  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.129066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:40.129074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:40.129147  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:40.166711  299667 cri.go:89] found id: ""
	I1205 07:47:40.166735  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.166743  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:40.166752  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:40.166813  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:40.192959  299667 cri.go:89] found id: ""
	I1205 07:47:40.192982  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.192991  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:40.192998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:40.193056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:40.218168  299667 cri.go:89] found id: ""
	I1205 07:47:40.218193  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.218202  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:40.218208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:40.218292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:40.243397  299667 cri.go:89] found id: ""
	I1205 07:47:40.243420  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.243428  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:40.243435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:40.243510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:40.268685  299667 cri.go:89] found id: ""
	I1205 07:47:40.268710  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.268718  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:40.268725  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:40.268802  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:40.294417  299667 cri.go:89] found id: ""
	I1205 07:47:40.294443  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.294452  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:40.294480  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:40.294561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:40.321495  299667 cri.go:89] found id: ""
	I1205 07:47:40.321556  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.321570  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:40.321580  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:40.321592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.360106  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:40.360133  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:40.420594  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:40.420627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:40.437302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:40.437332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:40.503821  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:40.503843  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:40.503855  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.028974  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:43.039847  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:43.039922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:43.066179  299667 cri.go:89] found id: ""
	I1205 07:47:43.066202  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.066210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:43.066216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:43.066274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:43.092504  299667 cri.go:89] found id: ""
	I1205 07:47:43.092528  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.092536  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:43.092543  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:43.092610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:43.124060  299667 cri.go:89] found id: ""
	I1205 07:47:43.124086  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.124095  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:43.124102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:43.124166  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:43.154063  299667 cri.go:89] found id: ""
	I1205 07:47:43.154089  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.154098  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:43.154104  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:43.154174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:43.185231  299667 cri.go:89] found id: ""
	I1205 07:47:43.185255  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.185264  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:43.185271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:43.185334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:43.214039  299667 cri.go:89] found id: ""
	I1205 07:47:43.214113  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.214135  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:43.214153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:43.214239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:43.239645  299667 cri.go:89] found id: ""
	I1205 07:47:43.239709  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.239730  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:43.239747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:43.239836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:43.264373  299667 cri.go:89] found id: ""
	I1205 07:47:43.264437  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.264458  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:43.264478  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:43.264514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:43.320427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:43.320464  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:43.334556  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:43.334586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:43.419578  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:43.419600  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:43.419613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.444937  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:43.444974  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:45.602606  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:48.102422  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:45.973125  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:45.983741  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:45.983836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:46.021150  299667 cri.go:89] found id: ""
	I1205 07:47:46.021200  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.021208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:46.021215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:46.021296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:46.046658  299667 cri.go:89] found id: ""
	I1205 07:47:46.046688  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.046725  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:46.046732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:46.046806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:46.072039  299667 cri.go:89] found id: ""
	I1205 07:47:46.072113  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.072136  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:46.072153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:46.072239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:46.117323  299667 cri.go:89] found id: ""
	I1205 07:47:46.117399  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.117423  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:46.117448  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:46.117538  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:46.154886  299667 cri.go:89] found id: ""
	I1205 07:47:46.154912  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.154921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:46.154928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:46.155012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:46.181153  299667 cri.go:89] found id: ""
	I1205 07:47:46.181199  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.181208  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:46.181215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:46.181302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:46.211244  299667 cri.go:89] found id: ""
	I1205 07:47:46.211270  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.211279  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:46.211285  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:46.211346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:46.235089  299667 cri.go:89] found id: ""
	I1205 07:47:46.235164  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.235180  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:46.235191  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:46.235203  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:46.305530  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:46.305551  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:46.305563  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:46.330757  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:46.330792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:46.376750  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:46.376781  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:46.439507  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:46.439542  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:48.953904  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:48.964561  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:48.964628  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:48.987874  299667 cri.go:89] found id: ""
	I1205 07:47:48.987900  299667 logs.go:282] 0 containers: []
	W1205 07:47:48.987909  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:48.987916  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:48.987974  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:49.014890  299667 cri.go:89] found id: ""
	I1205 07:47:49.014966  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.014980  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:49.014988  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:49.015065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:49.040290  299667 cri.go:89] found id: ""
	I1205 07:47:49.040313  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.040321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:49.040328  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:49.040385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:49.065216  299667 cri.go:89] found id: ""
	I1205 07:47:49.065278  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.065287  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:49.065293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:49.065350  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:49.091916  299667 cri.go:89] found id: ""
	I1205 07:47:49.091941  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.091950  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:49.091956  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:49.092015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:49.122078  299667 cri.go:89] found id: ""
	I1205 07:47:49.122101  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.122110  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:49.122117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:49.122174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:49.148378  299667 cri.go:89] found id: ""
	I1205 07:47:49.148400  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.148409  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:49.148415  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:49.148474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:49.181597  299667 cri.go:89] found id: ""
	I1205 07:47:49.181623  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.181639  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:49.181649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:49.181660  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:49.237429  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:49.237462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:49.252514  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:49.252540  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:49.317886  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:49.317908  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:49.317922  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:49.343471  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:49.343503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:50.103132  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:52.602329  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:51.885282  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:51.895713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:51.895806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:51.923558  299667 cri.go:89] found id: ""
	I1205 07:47:51.923582  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.923592  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:51.923599  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:51.923702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:51.952466  299667 cri.go:89] found id: ""
	I1205 07:47:51.952490  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.952499  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:51.952506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:51.952594  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:51.977008  299667 cri.go:89] found id: ""
	I1205 07:47:51.977032  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.977041  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:51.977048  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:51.977130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:52.001855  299667 cri.go:89] found id: ""
	I1205 07:47:52.001880  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.001890  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:52.001918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:52.002010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:52.041299  299667 cri.go:89] found id: ""
	I1205 07:47:52.041367  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.041391  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:52.041410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:52.041490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:52.066425  299667 cri.go:89] found id: ""
	I1205 07:47:52.066448  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.066457  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:52.066484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:52.066567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:52.093389  299667 cri.go:89] found id: ""
	I1205 07:47:52.093415  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.093425  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:52.093431  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:52.093490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:52.131379  299667 cri.go:89] found id: ""
	I1205 07:47:52.131404  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.131412  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:52.131421  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:52.131432  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:52.172215  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:52.172246  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:52.232285  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:52.232317  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:52.246383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:52.246461  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:52.312938  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:52.312999  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:52.313037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:54.839218  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:54.849526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:54.849596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:54.878984  299667 cri.go:89] found id: ""
	I1205 07:47:54.879018  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.879028  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:54.879034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:54.879115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:54.903570  299667 cri.go:89] found id: ""
	I1205 07:47:54.903593  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.903603  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:54.903609  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:54.903668  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:54.928679  299667 cri.go:89] found id: ""
	I1205 07:47:54.928701  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.928710  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:54.928716  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:54.928772  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:54.957443  299667 cri.go:89] found id: ""
	I1205 07:47:54.957465  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.957474  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:54.957481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:54.957539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:54.981997  299667 cri.go:89] found id: ""
	I1205 07:47:54.982022  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.982031  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:54.982037  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:54.982097  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:55.019658  299667 cri.go:89] found id: ""
	I1205 07:47:55.019684  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.019694  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:55.019702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:55.019774  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:55.045945  299667 cri.go:89] found id: ""
	I1205 07:47:55.045968  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.045977  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:55.045982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:55.046047  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:55.070660  299667 cri.go:89] found id: ""
	I1205 07:47:55.070682  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.070691  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:55.070753  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:55.070772  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:55.103139  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:57.602889  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:55.155877  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:55.155904  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:55.155918  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:55.182506  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:55.182538  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:55.209519  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:55.209545  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:55.268283  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:55.268315  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:57.781956  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:57.792419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:57.792511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:57.816805  299667 cri.go:89] found id: ""
	I1205 07:47:57.816830  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.816839  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:57.816845  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:57.816907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:57.844943  299667 cri.go:89] found id: ""
	I1205 07:47:57.844967  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.844975  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:57.844982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:57.845041  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:57.869698  299667 cri.go:89] found id: ""
	I1205 07:47:57.869720  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.869728  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:57.869735  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:57.869792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:57.894855  299667 cri.go:89] found id: ""
	I1205 07:47:57.894881  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.894889  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:57.894896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:57.895015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:57.919181  299667 cri.go:89] found id: ""
	I1205 07:47:57.919207  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.919217  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:57.919223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:57.919284  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:57.947523  299667 cri.go:89] found id: ""
	I1205 07:47:57.947545  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.947553  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:57.947559  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:57.947617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:57.972190  299667 cri.go:89] found id: ""
	I1205 07:47:57.972212  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.972221  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:57.972227  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:57.972337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:57.995598  299667 cri.go:89] found id: ""
	I1205 07:47:57.995620  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.995628  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:57.995637  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:57.995648  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:58.053180  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:58.053214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:58.066958  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:58.067035  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:58.148853  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:58.148871  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:58.148884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:58.177078  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:58.177111  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:00.102486  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:02.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:04.602418  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:00.709764  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:00.720636  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:00.720709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:00.745332  299667 cri.go:89] found id: ""
	I1205 07:48:00.745357  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.745367  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:00.745377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:00.745446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:00.769743  299667 cri.go:89] found id: ""
	I1205 07:48:00.769766  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.769774  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:00.769780  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:00.769838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:00.793723  299667 cri.go:89] found id: ""
	I1205 07:48:00.793747  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.793755  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:00.793761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:00.793849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:00.822270  299667 cri.go:89] found id: ""
	I1205 07:48:00.822295  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.822304  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:00.822311  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:00.822372  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:00.846055  299667 cri.go:89] found id: ""
	I1205 07:48:00.846079  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.846088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:00.846094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:00.846154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:00.875896  299667 cri.go:89] found id: ""
	I1205 07:48:00.875927  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.875938  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:00.875945  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:00.876005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:00.901376  299667 cri.go:89] found id: ""
	I1205 07:48:00.901401  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.901410  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:00.901417  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:00.901478  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:00.931038  299667 cri.go:89] found id: ""
	I1205 07:48:00.931062  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.931070  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:00.931080  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:00.931121  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:00.997183  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:00.997205  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:00.997217  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:01.023514  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:01.023552  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:01.051665  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:01.051694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:01.112451  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:01.112528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:03.628641  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:03.640043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:03.640115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:03.668895  299667 cri.go:89] found id: ""
	I1205 07:48:03.668923  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.668932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:03.668939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:03.669005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:03.698851  299667 cri.go:89] found id: ""
	I1205 07:48:03.698873  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.698882  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:03.698888  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:03.698946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:03.724736  299667 cri.go:89] found id: ""
	I1205 07:48:03.724758  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.724767  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:03.724773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:03.724831  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:03.751007  299667 cri.go:89] found id: ""
	I1205 07:48:03.751030  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.751038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:03.751072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:03.751143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:03.779130  299667 cri.go:89] found id: ""
	I1205 07:48:03.779153  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.779162  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:03.779168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:03.779226  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:03.808717  299667 cri.go:89] found id: ""
	I1205 07:48:03.808738  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.808798  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:03.808812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:03.808893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:03.834648  299667 cri.go:89] found id: ""
	I1205 07:48:03.834745  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.834769  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:03.834790  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:03.834894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:03.860266  299667 cri.go:89] found id: ""
	I1205 07:48:03.860290  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.860298  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
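	The scan above checks each expected control-plane container by name and finds none, running or exited, which confirms the control plane was never created rather than crash-looping. The same scan condenses to a loop (a sketch, run inside the node):
	
	    # Empty output for a component means no container was ever created for it.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<none>}"
	    done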
	I1205 07:48:03.860307  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:03.860326  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:03.925650  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:03.925672  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:03.925684  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:03.951836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:03.951866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:03.981147  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:03.981199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:04.037271  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:04.037308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:48:07.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:09.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
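	Interleaved with the output of PID 299667, PID 297527 (the no-preload test) is polling node no-preload-241270 for the Ready condition and retrying on connection refused. The same wait can be expressed with plain kubectl (a sketch; minikube names the kubeconfig context after the profile):
	
	    # Block until the node reports Ready, or give up after five minutes.
	    kubectl --context no-preload-241270 wait --for=condition=Ready \
	      node/no-preload-241270 --timeout=300s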
	I1205 07:48:06.551820  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:06.562850  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:06.562922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:06.588022  299667 cri.go:89] found id: ""
	I1205 07:48:06.588044  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.588052  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:06.588059  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:06.588121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:06.618654  299667 cri.go:89] found id: ""
	I1205 07:48:06.618677  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.618687  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:06.618693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:06.618760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:06.654167  299667 cri.go:89] found id: ""
	I1205 07:48:06.654188  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.654197  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:06.654203  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:06.654261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:06.681234  299667 cri.go:89] found id: ""
	I1205 07:48:06.681306  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.681327  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:06.681345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:06.681437  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:06.705922  299667 cri.go:89] found id: ""
	I1205 07:48:06.705946  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.705955  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:06.705962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:06.706044  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:06.730881  299667 cri.go:89] found id: ""
	I1205 07:48:06.730913  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.730924  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:06.730930  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:06.730987  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:06.755636  299667 cri.go:89] found id: ""
	I1205 07:48:06.755661  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.755670  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:06.755676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:06.755743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:06.780702  299667 cri.go:89] found id: ""
	I1205 07:48:06.780735  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.780743  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:06.780753  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:06.780764  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:06.841265  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:06.841303  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.854661  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:06.854686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:06.918298  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:06.918316  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:06.918328  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:06.943239  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:06.943274  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.471658  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:09.482526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:09.482598  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:09.507658  299667 cri.go:89] found id: ""
	I1205 07:48:09.507683  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.507692  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:09.507699  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:09.507765  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:09.538688  299667 cri.go:89] found id: ""
	I1205 07:48:09.538744  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.538758  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:09.538765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:09.538835  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:09.564016  299667 cri.go:89] found id: ""
	I1205 07:48:09.564041  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.564050  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:09.564056  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:09.564118  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:09.595020  299667 cri.go:89] found id: ""
	I1205 07:48:09.595047  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.595056  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:09.595062  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:09.595170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:09.627725  299667 cri.go:89] found id: ""
	I1205 07:48:09.627747  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.627756  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:09.627763  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:09.627821  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:09.661208  299667 cri.go:89] found id: ""
	I1205 07:48:09.661273  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.661290  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:09.661297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:09.661371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:09.686173  299667 cri.go:89] found id: ""
	I1205 07:48:09.686207  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.686216  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:09.686223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:09.686291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:09.710385  299667 cri.go:89] found id: ""
	I1205 07:48:09.710417  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.710426  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:09.710435  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:09.710447  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:09.724065  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:09.724089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:09.786352  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:09.786371  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:09.786383  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:09.814782  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:09.814823  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.845678  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:09.845705  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:11.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:14.102692  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:12.403586  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:12.414137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:12.414208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:12.443644  299667 cri.go:89] found id: ""
	I1205 07:48:12.443666  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.443677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:12.443683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:12.443743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:12.468970  299667 cri.go:89] found id: ""
	I1205 07:48:12.468992  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.469001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:12.469007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:12.469073  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:12.495420  299667 cri.go:89] found id: ""
	I1205 07:48:12.495441  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.495449  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:12.495455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:12.495513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:12.520821  299667 cri.go:89] found id: ""
	I1205 07:48:12.520848  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.520857  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:12.520862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:12.520920  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:12.546738  299667 cri.go:89] found id: ""
	I1205 07:48:12.546767  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.546776  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:12.546782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:12.546845  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:12.571663  299667 cri.go:89] found id: ""
	I1205 07:48:12.571687  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.571696  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:12.571702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:12.571759  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:12.600237  299667 cri.go:89] found id: ""
	I1205 07:48:12.600263  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.600272  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:12.600279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:12.600336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:12.645073  299667 cri.go:89] found id: ""
	I1205 07:48:12.645108  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.645116  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:12.645126  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:12.645137  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:12.661987  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:12.662020  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:12.726418  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:12.726442  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:12.726455  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:12.751208  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:12.751243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:12.780690  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:12.780718  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:16.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:18.602693  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:15.336959  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:15.349150  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:15.349233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:15.379055  299667 cri.go:89] found id: ""
	I1205 07:48:15.379075  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.379084  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:15.379090  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:15.379148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:15.411812  299667 cri.go:89] found id: ""
	I1205 07:48:15.411832  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.411841  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:15.411849  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:15.411907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:15.436056  299667 cri.go:89] found id: ""
	I1205 07:48:15.436077  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.436085  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:15.436091  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:15.436152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:15.461323  299667 cri.go:89] found id: ""
	I1205 07:48:15.461345  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.461354  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:15.461360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:15.461416  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:15.490552  299667 cri.go:89] found id: ""
	I1205 07:48:15.490577  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.490586  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:15.490593  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:15.490682  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:15.519448  299667 cri.go:89] found id: ""
	I1205 07:48:15.519471  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.519480  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:15.519487  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:15.519544  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:15.548923  299667 cri.go:89] found id: ""
	I1205 07:48:15.548947  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.548956  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:15.548962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:15.549024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:15.574804  299667 cri.go:89] found id: ""
	I1205 07:48:15.574828  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.574839  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:15.574847  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:15.574878  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.634392  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:15.634428  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:15.651971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:15.651998  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:15.719384  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:15.719407  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:15.719418  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:15.743909  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:15.743941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.273819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:18.284902  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:18.284975  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:18.310770  299667 cri.go:89] found id: ""
	I1205 07:48:18.310793  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.310802  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:18.310809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:18.310868  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:18.335509  299667 cri.go:89] found id: ""
	I1205 07:48:18.335530  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.335538  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:18.335544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:18.335602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:18.367849  299667 cri.go:89] found id: ""
	I1205 07:48:18.367875  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.367884  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:18.367890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:18.367947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:18.397008  299667 cri.go:89] found id: ""
	I1205 07:48:18.397037  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.397046  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:18.397053  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:18.397115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:18.422994  299667 cri.go:89] found id: ""
	I1205 07:48:18.423017  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.423035  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:18.423043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:18.423109  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:18.447590  299667 cri.go:89] found id: ""
	I1205 07:48:18.447666  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.447689  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:18.447713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:18.447801  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:18.472279  299667 cri.go:89] found id: ""
	I1205 07:48:18.472353  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.472375  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:18.472392  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:18.472477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:18.497432  299667 cri.go:89] found id: ""
	I1205 07:48:18.497454  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.497463  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:18.497471  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:18.497484  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:18.522163  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:18.522196  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.550354  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:18.550378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:18.605871  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:18.605944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:18.623406  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:18.623435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:18.692830  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:48:20.603254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:23.103214  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:21.193117  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:21.203367  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:21.203430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:21.228233  299667 cri.go:89] found id: ""
	I1205 07:48:21.228257  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.228265  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:21.228272  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:21.228331  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:21.256427  299667 cri.go:89] found id: ""
	I1205 07:48:21.256448  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.256456  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:21.256462  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:21.256523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:21.281113  299667 cri.go:89] found id: ""
	I1205 07:48:21.281136  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.281145  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:21.281151  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:21.281238  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:21.305777  299667 cri.go:89] found id: ""
	I1205 07:48:21.305798  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.305806  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:21.305812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:21.305869  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:21.335558  299667 cri.go:89] found id: ""
	I1205 07:48:21.335622  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.335645  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:21.335662  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:21.335745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:21.374161  299667 cri.go:89] found id: ""
	I1205 07:48:21.374230  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.374257  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:21.374275  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:21.374358  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:21.403378  299667 cri.go:89] found id: ""
	I1205 07:48:21.403442  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.403464  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:21.403481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:21.403561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:21.428681  299667 cri.go:89] found id: ""
	I1205 07:48:21.428707  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.428717  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:21.428725  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:21.428736  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:21.485472  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:21.485503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:21.499440  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:21.499521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:21.564057  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:21.564088  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:21.564102  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:21.588591  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:21.588627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.133263  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:24.145210  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:24.145292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:24.172487  299667 cri.go:89] found id: ""
	I1205 07:48:24.172509  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.172517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:24.172523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:24.172582  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:24.197589  299667 cri.go:89] found id: ""
	I1205 07:48:24.197612  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.197634  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:24.197641  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:24.197727  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:24.232698  299667 cri.go:89] found id: ""
	I1205 07:48:24.232773  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.232803  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:24.232821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:24.232927  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:24.261831  299667 cri.go:89] found id: ""
	I1205 07:48:24.261854  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.261863  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:24.261870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:24.261932  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:24.290390  299667 cri.go:89] found id: ""
	I1205 07:48:24.290412  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.290420  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:24.290426  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:24.290486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:24.314257  299667 cri.go:89] found id: ""
	I1205 07:48:24.314327  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.314360  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:24.314383  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:24.314475  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:24.338446  299667 cri.go:89] found id: ""
	I1205 07:48:24.338469  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.338477  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:24.338484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:24.338542  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:24.366265  299667 cri.go:89] found id: ""
	I1205 07:48:24.366302  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.366314  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:24.366323  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:24.366335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:24.398722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:24.398759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.430842  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:24.430872  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:24.486913  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:24.486947  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:24.500309  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:24.500333  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:24.571107  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
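The pass above is one iteration of minikube's diagnostics loop: probe for control-plane containers, then gather kubelet, dmesg, containerd, and container-status logs, and finally attempt a "kubectl describe nodes". A minimal sketch of running the same checks by hand, using the commands exactly as they appear in this log; PROFILE is a hypothetical placeholder for the affected minikube profile, not a name taken from this report:

    # hypothetical placeholder; substitute the failing profile name
    PROFILE=your-profile
    minikube ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 400
    minikube ssh -p "$PROFILE" -- sudo journalctl -u containerd -n 400
    minikube ssh -p "$PROFILE" -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
    minikube ssh -p "$PROFILE" -- sudo crictl ps -a
    minikube ssh -p "$PROFILE" -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The "connection refused" stderr above is expected from the last command as long as nothing is listening on localhost:8443 inside the node.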
	W1205 07:48:25.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:28.102336  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:27.072799  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:27.082983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:27.083049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:27.106973  299667 cri.go:89] found id: ""
	I1205 07:48:27.106997  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.107005  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:27.107012  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:27.107072  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:27.131580  299667 cri.go:89] found id: ""
	I1205 07:48:27.131604  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.131613  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:27.131619  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:27.131679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:27.156330  299667 cri.go:89] found id: ""
	I1205 07:48:27.156356  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.156364  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:27.156371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:27.156434  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:27.180350  299667 cri.go:89] found id: ""
	I1205 07:48:27.180375  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.180384  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:27.180391  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:27.180449  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:27.204756  299667 cri.go:89] found id: ""
	I1205 07:48:27.204779  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.204787  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:27.204800  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:27.204858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:27.232181  299667 cri.go:89] found id: ""
	I1205 07:48:27.232207  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.232216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:27.232223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:27.232299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:27.258059  299667 cri.go:89] found id: ""
	I1205 07:48:27.258086  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.258095  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:27.258102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:27.258165  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:27.281695  299667 cri.go:89] found id: ""
	I1205 07:48:27.281717  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.281725  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:27.281734  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:27.281746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:27.294855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:27.294880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:27.362846  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:27.362868  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:27.362880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:27.389761  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:27.389791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:27.422138  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:27.422165  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:29.980506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:29.990724  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:29.990791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:30.035211  299667 cri.go:89] found id: ""
	I1205 07:48:30.035238  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.035248  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:30.035256  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:30.035326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:30.063908  299667 cri.go:89] found id: ""
	I1205 07:48:30.063944  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.063953  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:30.063960  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:30.064034  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1205 07:48:30.103232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:32.602298  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:34.602332  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:30.095785  299667 cri.go:89] found id: ""
	I1205 07:48:30.095860  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.095883  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:30.095908  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:30.096002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:30.123133  299667 cri.go:89] found id: ""
	I1205 07:48:30.123156  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.123166  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:30.123172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:30.123235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:30.149862  299667 cri.go:89] found id: ""
	I1205 07:48:30.149885  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.149894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:30.149901  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:30.150013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:30.175817  299667 cri.go:89] found id: ""
	I1205 07:48:30.175883  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.175903  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:30.175920  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:30.176005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:30.201607  299667 cri.go:89] found id: ""
	I1205 07:48:30.201631  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.201640  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:30.201646  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:30.201711  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:30.227899  299667 cri.go:89] found id: ""
	I1205 07:48:30.227922  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.227931  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:30.227940  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:30.227952  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:30.241708  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:30.241742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:30.309566  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:30.309584  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:30.309597  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:30.334740  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:30.334771  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:30.378494  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:30.378524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:32.939968  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:32.950759  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:32.950832  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:32.978406  299667 cri.go:89] found id: ""
	I1205 07:48:32.978430  299667 logs.go:282] 0 containers: []
	W1205 07:48:32.978438  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:32.978454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:32.978513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:33.008532  299667 cri.go:89] found id: ""
	I1205 07:48:33.008559  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.008568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:33.008574  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:33.008650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:33.033972  299667 cri.go:89] found id: ""
	I1205 07:48:33.033997  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.034005  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:33.034013  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:33.034081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:33.059992  299667 cri.go:89] found id: ""
	I1205 07:48:33.060014  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.060023  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:33.060029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:33.060094  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:33.090354  299667 cri.go:89] found id: ""
	I1205 07:48:33.090379  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.090387  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:33.090395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:33.090454  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:33.114706  299667 cri.go:89] found id: ""
	I1205 07:48:33.114735  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.114744  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:33.114751  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:33.114809  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:33.140456  299667 cri.go:89] found id: ""
	I1205 07:48:33.140481  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.140490  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:33.140496  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:33.140557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:33.169438  299667 cri.go:89] found id: ""
	I1205 07:48:33.169461  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.169469  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:33.169478  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:33.169490  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:33.195155  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:33.195189  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:33.221590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:33.221617  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:33.277078  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:33.277110  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:33.290419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:33.290445  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:33.357621  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:48:36.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:38.602933  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
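The node_ready warnings from process 297527 come from polling the node's Ready condition against https://192.168.76.2:8443, which keeps refusing connections. One way to run the same check by hand, assuming a kubeconfig already pointing at the no-preload-241270 cluster (a sketch, not the test's own code):

    # prints "True" once the node reports Ready; errors out while the apiserver is unreachable
    kubectl get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'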
	I1205 07:48:35.857840  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:35.869455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:35.869525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:35.904563  299667 cri.go:89] found id: ""
	I1205 07:48:35.904585  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.904594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:35.904601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:35.904664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:35.932592  299667 cri.go:89] found id: ""
	I1205 07:48:35.932613  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.932622  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:35.932628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:35.932690  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:35.961011  299667 cri.go:89] found id: ""
	I1205 07:48:35.961033  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.961048  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:35.961055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:35.961121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:35.988109  299667 cri.go:89] found id: ""
	I1205 07:48:35.988131  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.988139  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:35.988146  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:35.988212  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:36.021866  299667 cri.go:89] found id: ""
	I1205 07:48:36.021894  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.021903  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:36.021910  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:36.021980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:36.053675  299667 cri.go:89] found id: ""
	I1205 07:48:36.053697  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.053706  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:36.053713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:36.053773  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:36.088227  299667 cri.go:89] found id: ""
	I1205 07:48:36.088252  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.088261  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:36.088268  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:36.088330  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:36.114723  299667 cri.go:89] found id: ""
	I1205 07:48:36.114753  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.114762  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:36.114772  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:36.114792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:36.130077  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:36.130105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:36.199710  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:36.199733  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:36.199746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:36.224920  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:36.224953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:36.260346  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:36.260373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:38.818746  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:38.829029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:38.829103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:38.861723  299667 cri.go:89] found id: ""
	I1205 07:48:38.861746  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.861755  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:38.861761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:38.861827  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:38.889749  299667 cri.go:89] found id: ""
	I1205 07:48:38.889772  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.889781  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:38.889787  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:38.889849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:38.925308  299667 cri.go:89] found id: ""
	I1205 07:48:38.925337  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.925346  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:38.925352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:38.925412  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:38.955710  299667 cri.go:89] found id: ""
	I1205 07:48:38.955732  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.955740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:38.955746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:38.955803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:38.980907  299667 cri.go:89] found id: ""
	I1205 07:48:38.980934  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.980943  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:38.980951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:38.981013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:39.011368  299667 cri.go:89] found id: ""
	I1205 07:48:39.011398  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.011409  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:39.011416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:39.011489  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:39.037693  299667 cri.go:89] found id: ""
	I1205 07:48:39.037719  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.037727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:39.037734  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:39.037806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:39.063915  299667 cri.go:89] found id: ""
	I1205 07:48:39.063940  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.063949  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:39.063957  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:39.063969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:39.120923  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:39.120960  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:39.134276  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:39.134302  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:39.194044  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:39.194064  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:39.194076  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:39.218536  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:39.218569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:41.102495  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:43.102732  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:41.747231  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:41.758180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:41.758258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:41.785400  299667 cri.go:89] found id: ""
	I1205 07:48:41.785426  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.785435  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:41.785442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:41.785509  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:41.817641  299667 cri.go:89] found id: ""
	I1205 07:48:41.817667  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.817676  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:41.817683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:41.817747  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:41.842820  299667 cri.go:89] found id: ""
	I1205 07:48:41.842846  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.842855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:41.842869  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:41.842933  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:41.880166  299667 cri.go:89] found id: ""
	I1205 07:48:41.880194  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.880208  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:41.880214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:41.880291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:41.911193  299667 cri.go:89] found id: ""
	I1205 07:48:41.911258  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.911273  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:41.911281  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:41.911337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:41.935720  299667 cri.go:89] found id: ""
	I1205 07:48:41.935745  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.935754  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:41.935761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:41.935823  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:41.962907  299667 cri.go:89] found id: ""
	I1205 07:48:41.962976  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.962992  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:41.962998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:41.963065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:41.991087  299667 cri.go:89] found id: ""
	I1205 07:48:41.991113  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.991121  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:41.991130  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:41.991140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:42.070025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:42.070073  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:42.086499  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:42.086528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:42.164053  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
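Each cycle ends the same way: every crictl probe returns an empty ID list (found id: ""), which is what triggers the repeated 'No container was found matching ...' warnings; no control-plane container ever starts, not even in an exited state. A compact sketch of the same probe loop, run inside the node, with the crictl invocation copied from the log:

    # empty output for a component means no container (running or exited) exists for it
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done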
	I1205 07:48:42.164130  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:42.164162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:42.192298  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:42.192342  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:44.734604  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:44.745356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:44.745423  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:44.770206  299667 cri.go:89] found id: ""
	I1205 07:48:44.770230  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.770239  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:44.770247  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:44.770305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:44.796086  299667 cri.go:89] found id: ""
	I1205 07:48:44.796109  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.796118  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:44.796124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:44.796182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:44.822053  299667 cri.go:89] found id: ""
	I1205 07:48:44.822125  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.822148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:44.822167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:44.822258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:44.855227  299667 cri.go:89] found id: ""
	I1205 07:48:44.855298  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.855320  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:44.855339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:44.855422  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:44.884787  299667 cri.go:89] found id: ""
	I1205 07:48:44.884859  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.885835  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:44.885875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:44.885967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:44.922015  299667 cri.go:89] found id: ""
	I1205 07:48:44.922040  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.922048  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:44.922055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:44.922120  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:44.946942  299667 cri.go:89] found id: ""
	I1205 07:48:44.946979  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.946988  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:44.946995  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:44.947056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:44.972229  299667 cri.go:89] found id: ""
	I1205 07:48:44.972253  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.972262  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:44.972270  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:44.972280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:44.997401  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:44.997434  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:45.054576  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:45.054602  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:45.102947  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:47.602661  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:45.133742  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:45.133782  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:45.155399  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:45.155496  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:45.257582  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
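
The repeated "connection refused" above simply means nothing is listening on the apiserver port yet. A minimal manual check from inside the node, reusing the exact paths shown in this log; the ss filter syntax is an assumption about the node image shipping iproute2:

    # Re-run the same describe; expect exit status 1 while the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # Confirm the port is closed: no listener on 8443 is what "connection refused" means.
    sudo ss -ltn 'sport = :8443'
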
	I1205 07:48:47.759254  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:47.770034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:47.770107  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:47.799850  299667 cri.go:89] found id: ""
	I1205 07:48:47.799873  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.799882  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:47.799889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:47.799947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:47.824989  299667 cri.go:89] found id: ""
	I1205 07:48:47.825014  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.825022  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:47.825028  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:47.825089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:47.857967  299667 cri.go:89] found id: ""
	I1205 07:48:47.857993  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.858002  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:47.858008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:47.858065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:47.890800  299667 cri.go:89] found id: ""
	I1205 07:48:47.890833  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.890842  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:47.890851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:47.890911  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:47.921850  299667 cri.go:89] found id: ""
	I1205 07:48:47.921874  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.921883  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:47.921890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:47.921950  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:47.946404  299667 cri.go:89] found id: ""
	I1205 07:48:47.946426  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.946435  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:47.946442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:47.946501  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:47.972095  299667 cri.go:89] found id: ""
	I1205 07:48:47.972117  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.972125  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:47.972131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:47.972189  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:47.996555  299667 cri.go:89] found id: ""
	I1205 07:48:47.996577  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.996585  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:47.996594  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:47.996605  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:48.054087  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:48.054122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:48.069006  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:48.069038  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:48.132946  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:48.132968  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:48.132981  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:48.158949  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:48.158986  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
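
The "container status" gather is a defensive one-liner: run crictl if it resolves on PATH, and fall back to docker if that fails. Spelled out, a sketch with the same fallback semantics as the command above:

    # Prefer crictl; if it is missing or errors out, try docker instead.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
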
	W1205 07:48:50.102346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:52.103160  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:54.602949  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:50.687838  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:50.698642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:50.698712  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:50.725092  299667 cri.go:89] found id: ""
	I1205 07:48:50.725113  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.725121  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:50.725128  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:50.725208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:50.750131  299667 cri.go:89] found id: ""
	I1205 07:48:50.750153  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.750161  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:50.750167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:50.750233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:50.774733  299667 cri.go:89] found id: ""
	I1205 07:48:50.774755  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.774765  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:50.774773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:50.774858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:50.803492  299667 cri.go:89] found id: ""
	I1205 07:48:50.803514  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.803524  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:50.803531  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:50.803596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:50.828915  299667 cri.go:89] found id: ""
	I1205 07:48:50.828938  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.828947  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:50.828953  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:50.829022  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:50.862065  299667 cri.go:89] found id: ""
	I1205 07:48:50.862090  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.862098  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:50.862105  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:50.862168  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:50.888327  299667 cri.go:89] found id: ""
	I1205 07:48:50.888356  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.888365  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:50.888371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:50.888432  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:50.917551  299667 cri.go:89] found id: ""
	I1205 07:48:50.917583  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.917592  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:50.917601  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:50.917613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:50.976691  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:50.976725  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:50.990259  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:50.990285  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:51.057592  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:51.057614  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:51.057628  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:51.088874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:51.088916  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
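
Each cycle above is the same eight-way CRI probe, one crictl call per control-plane component. A compact reconstruction of the loop, assuming crictl is configured against the containerd socket as in this run:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done
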
	I1205 07:48:53.619589  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:53.630457  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:53.630521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:53.662396  299667 cri.go:89] found id: ""
	I1205 07:48:53.662420  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.662429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:53.662435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:53.662493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:53.687365  299667 cri.go:89] found id: ""
	I1205 07:48:53.687393  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.687402  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:53.687408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:53.687469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:53.711757  299667 cri.go:89] found id: ""
	I1205 07:48:53.711782  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.711791  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:53.711798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:53.711893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:53.735695  299667 cri.go:89] found id: ""
	I1205 07:48:53.735721  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.735730  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:53.735736  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:53.735793  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:53.763008  299667 cri.go:89] found id: ""
	I1205 07:48:53.763032  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.763041  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:53.763047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:53.763104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:53.791424  299667 cri.go:89] found id: ""
	I1205 07:48:53.791498  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.791520  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:53.791537  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:53.791617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:53.815855  299667 cri.go:89] found id: ""
	I1205 07:48:53.815876  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.815884  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:53.815890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:53.815946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:53.839524  299667 cri.go:89] found id: ""
	I1205 07:48:53.839548  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.839557  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:53.839565  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:53.839577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.884515  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:53.884591  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:53.947646  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:53.947682  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:53.961152  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:53.961211  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:54.031297  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:54.031321  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:54.031335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
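
The unit-log gathers are plain journalctl tails, capped at 400 lines per pass so each retry stays bounded:

    # Same commands as in the log above; drop -n 400 for the full journal.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
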
	W1205 07:48:57.102570  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:59.102902  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:56.557021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:56.567576  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:56.567694  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:56.596257  299667 cri.go:89] found id: ""
	I1205 07:48:56.596291  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.596300  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:56.596306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:56.596381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:56.627549  299667 cri.go:89] found id: ""
	I1205 07:48:56.627575  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.627583  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:56.627590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:56.627649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:56.661291  299667 cri.go:89] found id: ""
	I1205 07:48:56.661313  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.661321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:56.661332  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:56.661391  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:56.687435  299667 cri.go:89] found id: ""
	I1205 07:48:56.687462  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.687471  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:56.687477  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:56.687540  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:56.712238  299667 cri.go:89] found id: ""
	I1205 07:48:56.712261  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.712271  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:56.712277  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:56.712340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:56.736638  299667 cri.go:89] found id: ""
	I1205 07:48:56.736663  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.736672  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:56.736690  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:56.736748  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:56.760967  299667 cri.go:89] found id: ""
	I1205 07:48:56.761001  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.761010  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:56.761016  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:56.761075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:56.784912  299667 cri.go:89] found id: ""
	I1205 07:48:56.784939  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.784947  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:56.784958  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:56.784969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.808701  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:56.808734  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:56.835856  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:56.835884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:56.896082  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:56.896154  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:56.914235  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:56.914310  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:56.981742  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:59.483411  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:59.494080  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:59.494149  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:59.521983  299667 cri.go:89] found id: ""
	I1205 07:48:59.522007  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.522015  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:59.522023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:59.522081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:59.547605  299667 cri.go:89] found id: ""
	I1205 07:48:59.547637  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.547646  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:59.547652  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:59.547718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:59.572816  299667 cri.go:89] found id: ""
	I1205 07:48:59.572839  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.572847  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:59.572854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:59.572909  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:59.598049  299667 cri.go:89] found id: ""
	I1205 07:48:59.598070  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.598078  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:59.598085  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:59.598145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:59.624907  299667 cri.go:89] found id: ""
	I1205 07:48:59.624928  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.624937  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:59.624943  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:59.625001  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:59.651926  299667 cri.go:89] found id: ""
	I1205 07:48:59.651947  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.651955  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:59.651962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:59.652019  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:59.680003  299667 cri.go:89] found id: ""
	I1205 07:48:59.680080  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.680103  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:59.680120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:59.680228  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:59.705437  299667 cri.go:89] found id: ""
	I1205 07:48:59.705465  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.705474  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:59.705483  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:59.705493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:59.763111  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:59.763142  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:59.777300  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:59.777368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:59.842575  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:59.842643  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:59.842663  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:59.869833  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:59.869908  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
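
The dmesg pass keeps only warning-and-worse kernel messages: -P disables the pager, -H prints human-readable timestamps, -L=never turns color off, and --level filters by severity. The command as run here:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
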
	W1205 07:49:01.602955  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:04.102698  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:02.402084  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:02.412782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:02.412851  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:02.438256  299667 cri.go:89] found id: ""
	I1205 07:49:02.438279  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.438287  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:02.438294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:02.438352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:02.465899  299667 cri.go:89] found id: ""
	I1205 07:49:02.465926  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.465935  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:02.465942  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:02.466005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:02.490481  299667 cri.go:89] found id: ""
	I1205 07:49:02.490503  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.490513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:02.490519  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:02.490586  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:02.516169  299667 cri.go:89] found id: ""
	I1205 07:49:02.516196  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.516205  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:02.516211  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:02.516271  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:02.541403  299667 cri.go:89] found id: ""
	I1205 07:49:02.541429  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.541439  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:02.541445  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:02.541507  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:02.566995  299667 cri.go:89] found id: ""
	I1205 07:49:02.567017  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.567025  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:02.567032  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:02.567099  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:02.597621  299667 cri.go:89] found id: ""
	I1205 07:49:02.597644  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.597652  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:02.597657  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:02.597716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:02.628924  299667 cri.go:89] found id: ""
	I1205 07:49:02.628951  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.628960  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:02.628969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:02.628980  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:02.693315  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:02.693348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:02.707066  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:02.707162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:02.771707  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:02.771729  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:02.771742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:02.797113  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:02.797145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
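
The interleaved node_ready warnings come from a second test profile (pid 297527) polling its own apiserver at 192.168.76.2:8443. The equivalent hand probe, with -k because the cluster's CA is not in the host trust store (endpoint copied from the warnings):

    curl -sk --max-time 5 \
      https://192.168.76.2:8443/api/v1/nodes/no-preload-241270 \
      || echo "apiserver not reachable yet"
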
	W1205 07:49:06.603033  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:09.102351  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:05.326530  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:05.336990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:05.337057  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:05.360427  299667 cri.go:89] found id: ""
	I1205 07:49:05.360451  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.360460  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:05.360466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:05.360525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:05.384196  299667 cri.go:89] found id: ""
	I1205 07:49:05.384222  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.384230  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:05.384237  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:05.384299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:05.410321  299667 cri.go:89] found id: ""
	I1205 07:49:05.410344  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.410352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:05.410358  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:05.410417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:05.433726  299667 cri.go:89] found id: ""
	I1205 07:49:05.433793  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.433815  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:05.433833  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:05.433921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:05.458853  299667 cri.go:89] found id: ""
	I1205 07:49:05.458924  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.458940  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:05.458947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:05.459008  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:05.482445  299667 cri.go:89] found id: ""
	I1205 07:49:05.482514  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.482529  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:05.482538  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:05.482610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:05.507192  299667 cri.go:89] found id: ""
	I1205 07:49:05.507260  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.507282  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:05.507300  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:05.507393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:05.532405  299667 cri.go:89] found id: ""
	I1205 07:49:05.532439  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.532448  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:05.532459  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:05.532470  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:05.587713  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:05.587744  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:05.600994  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:05.601062  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:05.676675  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:05.676745  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:05.676770  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:05.700917  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:05.700948  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.230743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:08.241254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:08.241324  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:08.265687  299667 cri.go:89] found id: ""
	I1205 07:49:08.265765  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.265781  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:08.265789  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:08.265873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:08.291182  299667 cri.go:89] found id: ""
	I1205 07:49:08.291212  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.291222  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:08.291230  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:08.291288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:08.316404  299667 cri.go:89] found id: ""
	I1205 07:49:08.316431  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.316439  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:08.316446  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:08.316503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:08.342004  299667 cri.go:89] found id: ""
	I1205 07:49:08.342030  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.342038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:08.342044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:08.342103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:08.370679  299667 cri.go:89] found id: ""
	I1205 07:49:08.370700  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.370708  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:08.370715  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:08.370791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:08.398788  299667 cri.go:89] found id: ""
	I1205 07:49:08.398848  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.398880  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:08.398896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:08.398967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:08.427499  299667 cri.go:89] found id: ""
	I1205 07:49:08.427532  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.427552  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:08.427560  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:08.427627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:08.455982  299667 cri.go:89] found id: ""
	I1205 07:49:08.456008  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.456016  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:08.456025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:08.456037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:08.469660  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:08.469687  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:08.534660  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:08.534684  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:08.534697  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:08.560195  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:08.560228  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.590035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:08.590061  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
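The pass ending above is one iteration of minikube's apiserver wait loop: pgrep looks for a running kube-apiserver process, crictl is then asked for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), every probe returns empty, and the log bundle (kubelet, dmesg, describe nodes, containerd, container status) is gathered before the next retry. A minimal sketch of the same two probes run by hand, assuming a node reachable over minikube ssh; the profile name is illustrative (borrowed from the neighboring no-preload lines), substitute the profile actually under test:

    PROFILE=no-preload-241270   # illustrative profile name
    # Probe 1: is a kube-apiserver process running on the node?
    minikube -p "$PROFILE" ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # Probe 2: does containerd know about a kube-apiserver container (running or exited)?
    minikube -p "$PROFILE" ssh "sudo crictl ps -a --quiet --name=kube-apiserver"

Empty output from both corresponds to the found id: "" and "0 containers" lines in the loop above.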
	W1205 07:49:11.102705  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:13.103312  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:11.150392  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:11.161108  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:11.161194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:11.185243  299667 cri.go:89] found id: ""
	I1205 07:49:11.185264  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.185273  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:11.185280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:11.185338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:11.208758  299667 cri.go:89] found id: ""
	I1205 07:49:11.208797  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.208806  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:11.208815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:11.208884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:11.235054  299667 cri.go:89] found id: ""
	I1205 07:49:11.235077  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.235086  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:11.235092  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:11.235157  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:11.259045  299667 cri.go:89] found id: ""
	I1205 07:49:11.259068  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.259076  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:11.259082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:11.259143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:11.288257  299667 cri.go:89] found id: ""
	I1205 07:49:11.288282  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.288291  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:11.288298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:11.288354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:11.312884  299667 cri.go:89] found id: ""
	I1205 07:49:11.312906  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.312914  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:11.312922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:11.312978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:11.341317  299667 cri.go:89] found id: ""
	I1205 07:49:11.341340  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.341348  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:11.341354  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:11.341411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:11.365207  299667 cri.go:89] found id: ""
	I1205 07:49:11.365234  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.365243  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:11.365260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:11.365271  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:11.423587  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:11.423619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:11.437723  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:11.437796  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:11.504822  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:11.504896  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:11.504935  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:11.529753  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:11.529791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:14.059148  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:14.069586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:14.069676  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:14.103804  299667 cri.go:89] found id: ""
	I1205 07:49:14.103828  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.103837  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:14.103843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:14.103901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:14.135010  299667 cri.go:89] found id: ""
	I1205 07:49:14.135031  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.135040  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:14.135045  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:14.135104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:14.170829  299667 cri.go:89] found id: ""
	I1205 07:49:14.170851  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.170859  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:14.170865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:14.170926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:14.199693  299667 cri.go:89] found id: ""
	I1205 07:49:14.199715  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.199724  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:14.199730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:14.199789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:14.223902  299667 cri.go:89] found id: ""
	I1205 07:49:14.223924  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.223931  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:14.223937  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:14.224003  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:14.247854  299667 cri.go:89] found id: ""
	I1205 07:49:14.247926  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.247950  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:14.247969  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:14.248063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:14.272146  299667 cri.go:89] found id: ""
	I1205 07:49:14.272219  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.272250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:14.272270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:14.272375  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:14.297307  299667 cri.go:89] found id: ""
	I1205 07:49:14.297377  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.297404  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:14.297421  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:14.297436  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:14.352148  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:14.352181  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:14.365391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:14.365420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:14.429045  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:14.429068  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:14.429080  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:14.453460  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:14.453494  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:15.602762  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:17.602959  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
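Two test processes write into this stretch of the report, which is why timestamps occasionally step backwards: process 299667 runs the retry loop above, while process 297527 polls node readiness for no-preload-241270 and interleaves its connection-refused warnings. When working from a saved copy of this log, the process-id field separates the two streams; a sketch, assuming the section was saved as report.txt (hypothetical filename):

    grep ' 299667 ' report.txt    # only the apiserver wait loop
    grep ' 297527 ' report.txt    # only the no-preload readiness poller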
	I1205 07:49:16.984086  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:16.994499  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:16.994567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:17.022900  299667 cri.go:89] found id: ""
	I1205 07:49:17.022923  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.022932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:17.022939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:17.022997  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:17.047244  299667 cri.go:89] found id: ""
	I1205 07:49:17.047318  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.047332  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:17.047339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:17.047415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:17.070683  299667 cri.go:89] found id: ""
	I1205 07:49:17.070716  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.070725  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:17.070732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:17.070811  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:17.104238  299667 cri.go:89] found id: ""
	I1205 07:49:17.104310  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.104332  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:17.104351  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:17.104433  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:17.130787  299667 cri.go:89] found id: ""
	I1205 07:49:17.130867  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.130890  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:17.130907  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:17.131014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:17.159177  299667 cri.go:89] found id: ""
	I1205 07:49:17.159212  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.159221  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:17.159228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:17.159293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:17.187127  299667 cri.go:89] found id: ""
	I1205 07:49:17.187148  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.187157  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:17.187168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:17.187225  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:17.214608  299667 cri.go:89] found id: ""
	I1205 07:49:17.214633  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.214641  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:17.214650  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:17.214690  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:17.227937  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:17.227964  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:17.290517  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:17.290581  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:17.290600  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:17.315039  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:17.315074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:17.343285  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:17.343348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:19.899406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:19.910597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:19.910679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:19.935640  299667 cri.go:89] found id: ""
	I1205 07:49:19.935664  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.935673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:19.935679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:19.935736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:19.959309  299667 cri.go:89] found id: ""
	I1205 07:49:19.959336  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.959345  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:19.959352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:19.959418  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:19.982862  299667 cri.go:89] found id: ""
	I1205 07:49:19.982884  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.982893  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:19.982899  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:19.982957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:20.016784  299667 cri.go:89] found id: ""
	I1205 07:49:20.016810  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.016819  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:20.016826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:20.016893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:20.044555  299667 cri.go:89] found id: ""
	I1205 07:49:20.044580  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.044590  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:20.044597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:20.044657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:20.080570  299667 cri.go:89] found id: ""
	I1205 07:49:20.080595  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.080603  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:20.080610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:20.080689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1205 07:49:20.102423  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:22.102493  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:24.602330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:20.112802  299667 cri.go:89] found id: ""
	I1205 07:49:20.112829  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.112838  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:20.112852  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:20.112912  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:20.145614  299667 cri.go:89] found id: ""
	I1205 07:49:20.145642  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.145650  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:20.145659  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:20.145670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:20.208200  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:20.208233  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:20.222391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:20.222422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:20.285471  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:20.285500  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:20.285513  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:20.311384  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:20.311415  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:22.840933  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:22.854843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:22.854939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:22.881572  299667 cri.go:89] found id: ""
	I1205 07:49:22.881598  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.881608  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:22.881614  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:22.881677  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:22.917647  299667 cri.go:89] found id: ""
	I1205 07:49:22.917677  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.917686  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:22.917692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:22.917750  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:22.943325  299667 cri.go:89] found id: ""
	I1205 07:49:22.943346  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.943355  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:22.943362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:22.943426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:22.967894  299667 cri.go:89] found id: ""
	I1205 07:49:22.967955  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.967979  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:22.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:22.968076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:22.994911  299667 cri.go:89] found id: ""
	I1205 07:49:22.994976  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.994991  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:22.994998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:22.995056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:23.022399  299667 cri.go:89] found id: ""
	I1205 07:49:23.022464  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.022486  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:23.022506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:23.022581  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:23.048262  299667 cri.go:89] found id: ""
	I1205 07:49:23.048283  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.048291  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:23.048297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:23.048355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:23.072655  299667 cri.go:89] found id: ""
	I1205 07:49:23.072684  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.072694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:23.072702  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:23.072720  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:23.132711  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:23.132742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:23.146553  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:23.146576  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:23.218207  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:23.218230  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:23.218243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:23.242426  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:23.242462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
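Every "describe nodes" attempt in these cycles fails the same way: the kubectl binary staged on the node dials the apiserver at localhost:8443 per /var/lib/minikube/kubeconfig, and with no kube-apiserver container running the connection is refused, so only the stderr block is captured. The check can be reproduced with the same binary and kubeconfig paths the log already uses (profile name again illustrative):

    minikube -p no-preload-241270 ssh "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig"

Until something listens on 8443 this exits with status 1 and the same connection-refused errors quoted above.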
	W1205 07:49:27.102316  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:29.602939  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:25.772926  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:25.783467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:25.783546  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:25.811044  299667 cri.go:89] found id: ""
	I1205 07:49:25.811066  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.811075  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:25.811081  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:25.811139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:25.835534  299667 cri.go:89] found id: ""
	I1205 07:49:25.835558  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.835568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:25.835575  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:25.835637  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:25.866938  299667 cri.go:89] found id: ""
	I1205 07:49:25.866966  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.866974  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:25.866981  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:25.867043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:25.897273  299667 cri.go:89] found id: ""
	I1205 07:49:25.897302  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.897313  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:25.897320  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:25.897380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:25.923461  299667 cri.go:89] found id: ""
	I1205 07:49:25.923489  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.923497  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:25.923504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:25.923590  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:25.946791  299667 cri.go:89] found id: ""
	I1205 07:49:25.946813  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.946822  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:25.946828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:25.946885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:25.971479  299667 cri.go:89] found id: ""
	I1205 07:49:25.971507  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.971515  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:25.971521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:25.971580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:25.994965  299667 cri.go:89] found id: ""
	I1205 07:49:25.994986  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.994994  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:25.995003  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:25.995014  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:26.058667  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:26.058701  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:26.073089  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:26.073119  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:26.150334  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:26.150355  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:26.150367  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:26.182077  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:26.182109  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:28.710700  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:28.722142  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:28.722208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:28.749003  299667 cri.go:89] found id: ""
	I1205 07:49:28.749029  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.749037  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:28.749044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:28.749101  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:28.774112  299667 cri.go:89] found id: ""
	I1205 07:49:28.774141  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.774152  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:28.774158  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:28.774215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:28.797966  299667 cri.go:89] found id: ""
	I1205 07:49:28.797987  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.797996  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:28.798002  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:28.798058  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:28.825668  299667 cri.go:89] found id: ""
	I1205 07:49:28.825694  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.825703  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:28.825709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:28.825788  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:28.856952  299667 cri.go:89] found id: ""
	I1205 07:49:28.856986  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.857001  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:28.857008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:28.857091  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:28.882695  299667 cri.go:89] found id: ""
	I1205 07:49:28.882730  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.882746  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:28.882753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:28.882822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:28.909550  299667 cri.go:89] found id: ""
	I1205 07:49:28.909584  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.909594  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:28.909601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:28.909671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:28.942251  299667 cri.go:89] found id: ""
	I1205 07:49:28.942319  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.942340  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:28.942362  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:28.942387  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:29.005506  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:29.005539  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:29.005554  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:29.030880  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:29.030910  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:29.058353  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:29.058381  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:29.121228  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:29.121304  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:32.102320  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:34.103275  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
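The interleaved warnings from process 297527 above come from the no-preload test polling the node's "Ready" condition against the apiserver URL and retrying on failure. A minimal, self-contained sketch of such a retry poll, assuming a plain HTTPS GET with certificate verification disabled for illustration (the real test uses client-go with proper credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Hypothetical poll loop, not minikube source: hit the node object URL
    	// seen in the log and retry while the apiserver refuses connections.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			// e.g. dial tcp 192.168.76.2:8443: connect: connection refused
    			fmt.Println("will retry:", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("node endpoint reachable, status:", resp.Status)
    		return
    	}
    }
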
	I1205 07:49:31.636506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:31.647234  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:31.647305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:31.672508  299667 cri.go:89] found id: ""
	I1205 07:49:31.672530  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.672539  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:31.672545  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:31.672603  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:31.696860  299667 cri.go:89] found id: ""
	I1205 07:49:31.696885  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.696894  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:31.696900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:31.696970  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:31.722649  299667 cri.go:89] found id: ""
	I1205 07:49:31.722676  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.722685  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:31.722692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:31.722770  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:31.748068  299667 cri.go:89] found id: ""
	I1205 07:49:31.748093  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.748101  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:31.748109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:31.748169  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:31.773290  299667 cri.go:89] found id: ""
	I1205 07:49:31.773315  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.773324  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:31.773330  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:31.773393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:31.804425  299667 cri.go:89] found id: ""
	I1205 07:49:31.804445  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.804454  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:31.804461  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:31.804521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:31.829116  299667 cri.go:89] found id: ""
	I1205 07:49:31.829137  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.829146  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:31.829152  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:31.829241  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:31.867330  299667 cri.go:89] found id: ""
	I1205 07:49:31.867406  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.867418  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:31.867427  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:31.867438  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:31.931647  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:31.931680  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.945211  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:31.945236  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:32.004694  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:32.004719  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:32.004738  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:32.031538  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:32.031572  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:34.562576  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:34.573366  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:34.573477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:34.599238  299667 cri.go:89] found id: ""
	I1205 07:49:34.599262  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.599272  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:34.599279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:34.599342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:34.624561  299667 cri.go:89] found id: ""
	I1205 07:49:34.624589  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.624598  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:34.624604  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:34.624666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:34.649603  299667 cri.go:89] found id: ""
	I1205 07:49:34.649624  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.649637  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:34.649644  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:34.649707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:34.674019  299667 cri.go:89] found id: ""
	I1205 07:49:34.674043  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.674052  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:34.674058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:34.674121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:34.700890  299667 cri.go:89] found id: ""
	I1205 07:49:34.700912  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.700921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:34.700928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:34.700988  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:34.727454  299667 cri.go:89] found id: ""
	I1205 07:49:34.727482  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.727491  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:34.727498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:34.727558  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:34.753086  299667 cri.go:89] found id: ""
	I1205 07:49:34.753107  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.753115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:34.753120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:34.753208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:34.779077  299667 cri.go:89] found id: ""
	I1205 07:49:34.779100  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.779109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:34.779118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:34.779129  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:34.839330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:34.839368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:34.857129  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:34.857175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:34.932420  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:34.932440  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:34.932452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:34.957616  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:34.957649  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:36.602677  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:39.102319  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:37.486529  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:37.496909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:37.496977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:37.521254  299667 cri.go:89] found id: ""
	I1205 07:49:37.521315  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.521349  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:37.521372  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:37.521462  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:37.544759  299667 cri.go:89] found id: ""
	I1205 07:49:37.544782  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.544791  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:37.544798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:37.544854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:37.569519  299667 cri.go:89] found id: ""
	I1205 07:49:37.569549  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.569558  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:37.569564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:37.569624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:37.593917  299667 cri.go:89] found id: ""
	I1205 07:49:37.593938  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.593947  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:37.593954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:37.594014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:37.619915  299667 cri.go:89] found id: ""
	I1205 07:49:37.619940  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.619949  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:37.619955  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:37.620016  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:37.647160  299667 cri.go:89] found id: ""
	I1205 07:49:37.647186  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.647195  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:37.647202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:37.647261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:37.672076  299667 cri.go:89] found id: ""
	I1205 07:49:37.672097  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.672105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:37.672111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:37.672170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:37.697550  299667 cri.go:89] found id: ""
	I1205 07:49:37.697573  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.697581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:37.697590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:37.697601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:37.754073  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:37.754105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:37.769043  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:37.769071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:37.831338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:37.831359  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:37.831371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:37.857528  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:37.857564  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:41.602800  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:44.102845  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:40.404513  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:40.415071  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:40.415143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:40.439261  299667 cri.go:89] found id: ""
	I1205 07:49:40.439283  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.439291  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:40.439298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:40.439355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:40.464063  299667 cri.go:89] found id: ""
	I1205 07:49:40.464084  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.464092  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:40.464098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:40.464158  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:40.490322  299667 cri.go:89] found id: ""
	I1205 07:49:40.490344  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.490352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:40.490359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:40.490419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:40.517055  299667 cri.go:89] found id: ""
	I1205 07:49:40.517078  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.517087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:40.517093  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:40.517151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:40.545250  299667 cri.go:89] found id: ""
	I1205 07:49:40.545273  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.545282  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:40.545288  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:40.545348  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:40.569118  299667 cri.go:89] found id: ""
	I1205 07:49:40.569142  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.569151  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:40.569188  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:40.569248  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:40.593152  299667 cri.go:89] found id: ""
	I1205 07:49:40.593209  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.593217  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:40.593223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:40.593287  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:40.617285  299667 cri.go:89] found id: ""
	I1205 07:49:40.617308  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.617316  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:40.617325  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:40.617336  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:40.681518  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:40.681540  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:40.681553  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:40.707309  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:40.707347  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.740118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:40.740145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:40.798971  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:40.799001  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
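Each cycle above runs the same eight crictl probes (one per control-plane component) before falling back to gathering journal and dmesg logs; every empty result is what produces the `No container was found matching "..."` warnings. A self-contained sketch of that probe loop, assuming crictl is on PATH (an illustration of the commands in the log, not minikube's actual cri.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Equivalent to: sudo crictl ps -a --quiet --name=<component>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.TrimSpace(string(out))
    		if err != nil || ids == "" {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %s\n", name, ids)
    	}
    }
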
	I1205 07:49:43.313313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:43.324257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:43.324337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:43.356730  299667 cri.go:89] found id: ""
	I1205 07:49:43.356755  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.356763  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:43.356770  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:43.356828  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:43.386071  299667 cri.go:89] found id: ""
	I1205 07:49:43.386097  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.386106  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:43.386112  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:43.386172  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:43.415579  299667 cri.go:89] found id: ""
	I1205 07:49:43.415606  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.415615  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:43.415621  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:43.415679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:43.441039  299667 cri.go:89] found id: ""
	I1205 07:49:43.441064  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.441075  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:43.441082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:43.441141  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:43.466399  299667 cri.go:89] found id: ""
	I1205 07:49:43.466432  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.466442  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:43.466449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:43.466519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:43.497264  299667 cri.go:89] found id: ""
	I1205 07:49:43.497309  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.497319  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:43.497326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:43.497397  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:43.522221  299667 cri.go:89] found id: ""
	I1205 07:49:43.522247  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.522256  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:43.522262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:43.522325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:43.546887  299667 cri.go:89] found id: ""
	I1205 07:49:43.546953  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.546969  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:43.546980  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:43.546992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:43.613596  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:43.613644  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.628794  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:43.628825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:43.698835  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:43.698854  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:43.698866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:43.725776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:43.725811  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:46.103222  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:48.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:46.256365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:46.267583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:46.267659  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:46.296652  299667 cri.go:89] found id: ""
	I1205 07:49:46.296679  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.296687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:46.296694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:46.296760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:46.323489  299667 cri.go:89] found id: ""
	I1205 07:49:46.323514  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.323522  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:46.323529  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:46.323593  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:46.355225  299667 cri.go:89] found id: ""
	I1205 07:49:46.355249  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.355258  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:46.355265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:46.355340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:46.383644  299667 cri.go:89] found id: ""
	I1205 07:49:46.383678  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.383687  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:46.383694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:46.383768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:46.421484  299667 cri.go:89] found id: ""
	I1205 07:49:46.421518  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.421527  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:46.421533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:46.421602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:46.447032  299667 cri.go:89] found id: ""
	I1205 07:49:46.447057  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.447066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:46.447073  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:46.447136  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:46.472839  299667 cri.go:89] found id: ""
	I1205 07:49:46.472860  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.472867  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:46.472873  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:46.472930  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:46.501395  299667 cri.go:89] found id: ""
	I1205 07:49:46.501422  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.501432  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:46.501441  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:46.501452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:46.558146  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:46.558178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:46.573118  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:46.573146  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:46.637720  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:46.637741  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:46.637754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:46.662623  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:46.662658  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
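The recurring `connect: connection refused` stderr from kubectl above means nothing is listening on the apiserver port inside the node, so every API call fails before TLS even starts. A bare TCP dial is enough to reproduce the symptom (a minimal sketch, assuming the same localhost:8443 endpoint as the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// If no apiserver process is bound to the port, this fails with the
    	// same "connect: connection refused" error kubectl reports above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
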
	I1205 07:49:49.193341  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:49.204485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:49.204616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:49.235316  299667 cri.go:89] found id: ""
	I1205 07:49:49.235380  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.235403  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:49.235424  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:49.235503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:49.259781  299667 cri.go:89] found id: ""
	I1205 07:49:49.259811  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.259820  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:49.259826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:49.259894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:49.283985  299667 cri.go:89] found id: ""
	I1205 07:49:49.284025  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.284034  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:49.284041  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:49.284123  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:49.312614  299667 cri.go:89] found id: ""
	I1205 07:49:49.312643  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.312652  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:49.312659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:49.312728  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:49.338339  299667 cri.go:89] found id: ""
	I1205 07:49:49.338362  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.338371  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:49.338378  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:49.338444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:49.367532  299667 cri.go:89] found id: ""
	I1205 07:49:49.367557  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.367565  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:49.367572  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:49.367635  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:49.401925  299667 cri.go:89] found id: ""
	I1205 07:49:49.402000  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.402020  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:49.402038  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:49.402122  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:49.428942  299667 cri.go:89] found id: ""
	I1205 07:49:49.428975  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.428993  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
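
	Each gather pass above walks the same fixed list of control-plane names through crictl. A compact sketch of that sweep, assuming crictl is wired to containerd as it is on this node:

	    # Reproduce the per-component container sweep by hand. An empty
	    # result for every name, as in this run, means the control-plane
	    # containers were never created at all, pointing at a bootstrap
	    # failure rather than crash-looping pods.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
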
	I1205 07:49:49.429003  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:49.429021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:49.492403  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:49.492426  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:49.492439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:49.517991  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:49.518021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.545729  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:49.545754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:49.601110  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:49.601140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
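
	For reference, the dmesg invocation above uses util-linux short flags; spelled out with the equivalent long options (same behavior, assuming the util-linux dmesg shipped in the node image):

	    # --nopager is -P, --human is -H, --color=never is -L=never;
	    # --level restricts output to the listed priorities, and tail
	    # keeps only the newest 400 lines.
	    sudo dmesg --nopager --human --color=never \
	         --level=warn,err,crit,alert,emerg | tail -n 400
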
	W1205 07:49:51.102462  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:53.103333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:52.115102  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:52.128449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:52.128522  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:52.158550  299667 cri.go:89] found id: ""
	I1205 07:49:52.158575  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.158584  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:52.158591  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:52.158654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:52.183729  299667 cri.go:89] found id: ""
	I1205 07:49:52.183750  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.183759  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:52.183765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:52.183829  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:52.209241  299667 cri.go:89] found id: ""
	I1205 07:49:52.209269  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.209279  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:52.209286  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:52.209367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:52.234457  299667 cri.go:89] found id: ""
	I1205 07:49:52.234488  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.234497  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:52.234504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:52.234568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:52.258774  299667 cri.go:89] found id: ""
	I1205 07:49:52.258799  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.258808  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:52.258815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:52.258904  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:52.284285  299667 cri.go:89] found id: ""
	I1205 07:49:52.284319  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.284329  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:52.284336  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:52.284406  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:52.311443  299667 cri.go:89] found id: ""
	I1205 07:49:52.311470  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.311479  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:52.311485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:52.311577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:52.335827  299667 cri.go:89] found id: ""
	I1205 07:49:52.335859  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.335868  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:52.335879  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:52.335890  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:52.395851  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:52.395889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:52.410419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:52.410446  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:52.478966  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:52.478997  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:52.479010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:52.504082  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:52.504114  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.031406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:55.042458  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:55.042534  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:55.066642  299667 cri.go:89] found id: ""
	I1205 07:49:55.066667  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.066677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:55.066684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:55.066746  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:49:55.602712  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:58.102265  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
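
	The interleaved warnings above come from the parallel no-preload test (process 297527) polling the node's Ready condition against 192.168.76.2:8443. A hand-run equivalent of that poll, using the node name taken from the log and assuming the active kubeconfig context points at the same cluster:

	    # Prints "True"/"False"/"Unknown" when the apiserver is
	    # reachable; in this run it would fail with the same
	    # connection-refused error instead.
	    kubectl get node no-preload-241270 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
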
	I1205 07:49:55.091150  299667 cri.go:89] found id: ""
	I1205 07:49:55.091180  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.091189  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:55.091195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:55.091255  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:55.121930  299667 cri.go:89] found id: ""
	I1205 07:49:55.121951  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.121960  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:55.121965  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:55.122023  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:55.149981  299667 cri.go:89] found id: ""
	I1205 07:49:55.150058  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.150079  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:55.150097  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:55.150184  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:55.173681  299667 cri.go:89] found id: ""
	I1205 07:49:55.173704  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.173712  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:55.173718  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:55.173777  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:55.197308  299667 cri.go:89] found id: ""
	I1205 07:49:55.197332  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.197341  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:55.197347  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:55.197403  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:55.223472  299667 cri.go:89] found id: ""
	I1205 07:49:55.223493  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.223502  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:55.223508  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:55.223572  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:55.252432  299667 cri.go:89] found id: ""
	I1205 07:49:55.252457  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.252466  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:55.252474  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:55.252487  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:55.318488  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:55.318520  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:55.318533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:55.343511  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:55.343587  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.386735  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:55.386818  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:55.452457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:55.452497  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:57.966172  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:57.976919  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:57.976991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:58.003394  299667 cri.go:89] found id: ""
	I1205 07:49:58.003420  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.003429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:58.003436  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:58.003505  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:58.040382  299667 cri.go:89] found id: ""
	I1205 07:49:58.040403  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.040411  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:58.040425  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:58.040486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:58.066131  299667 cri.go:89] found id: ""
	I1205 07:49:58.066161  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.066170  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:58.066177  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:58.066236  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:58.092126  299667 cri.go:89] found id: ""
	I1205 07:49:58.092149  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.092157  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:58.092164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:58.092224  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:58.123111  299667 cri.go:89] found id: ""
	I1205 07:49:58.123138  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.123147  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:58.123154  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:58.123215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:58.155898  299667 cri.go:89] found id: ""
	I1205 07:49:58.155920  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.155929  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:58.155936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:58.156002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:58.181658  299667 cri.go:89] found id: ""
	I1205 07:49:58.181684  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.181694  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:58.181700  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:58.181760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:58.211071  299667 cri.go:89] found id: ""
	I1205 07:49:58.211093  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.211102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:58.211111  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:58.211122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:58.271505  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:58.271551  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:58.287071  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:58.287097  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:58.357627  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
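
	One cross-check the log does not perform but that follows naturally from it: every kubectl probe in this process fails against localhost:8443, so confirming which endpoint the in-VM kubeconfig actually names can rule out a stale server address. A sketch, with the path copied from the commands above:

	    # Show the apiserver URL the bundled kubectl is being pointed at.
	    sudo grep 'server:' /var/lib/minikube/kubeconfig
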
	I1205 07:49:58.357680  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:58.357694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:58.388703  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:58.388747  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:00.103169  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:02.602855  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:04.603343  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:00.928058  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:00.939115  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:00.939186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:00.967955  299667 cri.go:89] found id: ""
	I1205 07:50:00.967979  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.967989  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:00.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:00.968054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:00.994981  299667 cri.go:89] found id: ""
	I1205 07:50:00.995006  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.995014  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:00.995022  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:00.995081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:01.020388  299667 cri.go:89] found id: ""
	I1205 07:50:01.020412  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.020421  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:01.020427  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:01.020487  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:01.045771  299667 cri.go:89] found id: ""
	I1205 07:50:01.045796  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.045816  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:01.045839  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:01.045915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:01.072970  299667 cri.go:89] found id: ""
	I1205 07:50:01.072995  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.073004  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:01.073009  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:01.073069  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:01.110343  299667 cri.go:89] found id: ""
	I1205 07:50:01.110365  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.110374  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:01.110382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:01.110442  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:01.143588  299667 cri.go:89] found id: ""
	I1205 07:50:01.143627  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.143669  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:01.143676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:01.143734  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:01.173718  299667 cri.go:89] found id: ""
	I1205 07:50:01.173744  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.173753  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:01.173762  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:01.173775  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:01.240437  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:01.240461  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:01.240475  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:01.265849  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:01.265884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:01.295649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:01.295676  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:01.352457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:01.352493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:03.872935  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:03.884137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:03.884213  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:03.909107  299667 cri.go:89] found id: ""
	I1205 07:50:03.909129  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.909138  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:03.909144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:03.909231  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:03.935188  299667 cri.go:89] found id: ""
	I1205 07:50:03.935217  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.935229  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:03.935235  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:03.935293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:03.960991  299667 cri.go:89] found id: ""
	I1205 07:50:03.961013  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.961023  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:03.961029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:03.961087  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:03.993563  299667 cri.go:89] found id: ""
	I1205 07:50:03.993586  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.993595  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:03.993602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:03.993658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:04.022615  299667 cri.go:89] found id: ""
	I1205 07:50:04.022640  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.022650  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:04.022656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:04.022744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:04.052044  299667 cri.go:89] found id: ""
	I1205 07:50:04.052067  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.052076  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:04.052083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:04.052155  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:04.077688  299667 cri.go:89] found id: ""
	I1205 07:50:04.077766  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.077790  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:04.077798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:04.077873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:04.108745  299667 cri.go:89] found id: ""
	I1205 07:50:04.108772  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.108781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:04.108790  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:04.108806  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:04.124370  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:04.124398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:04.202708  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:04.202730  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:04.202742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:04.228486  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:04.228522  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:04.257187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:04.257214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:07.102231  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:09.102419  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:06.817489  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:06.828313  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:06.828385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:06.852373  299667 cri.go:89] found id: ""
	I1205 07:50:06.852445  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.852468  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:06.852489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:06.852557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:06.877263  299667 cri.go:89] found id: ""
	I1205 07:50:06.877291  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.877300  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:06.877306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:06.877373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:06.902856  299667 cri.go:89] found id: ""
	I1205 07:50:06.902882  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.902892  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:06.902898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:06.902962  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:06.928569  299667 cri.go:89] found id: ""
	I1205 07:50:06.928595  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.928604  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:06.928611  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:06.928689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:06.953448  299667 cri.go:89] found id: ""
	I1205 07:50:06.953481  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.953491  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:06.953498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:06.953567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:06.978486  299667 cri.go:89] found id: ""
	I1205 07:50:06.978557  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.978579  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:06.978592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:06.978653  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:07.004116  299667 cri.go:89] found id: ""
	I1205 07:50:07.004201  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.004245  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:07.004278  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:07.004369  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:07.030912  299667 cri.go:89] found id: ""
	I1205 07:50:07.030946  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.030956  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:07.030966  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:07.030995  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:07.087669  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:07.087703  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:07.102364  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:07.102424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:07.175733  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:07.175756  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:07.175768  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:07.201087  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:07.201120  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.733660  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:09.744254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:09.744322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:09.768703  299667 cri.go:89] found id: ""
	I1205 07:50:09.768725  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.768733  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:09.768740  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:09.768803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:09.792862  299667 cri.go:89] found id: ""
	I1205 07:50:09.792884  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.792892  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:09.792898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:09.792953  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:09.816998  299667 cri.go:89] found id: ""
	I1205 07:50:09.817020  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.817028  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:09.817042  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:09.817098  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:09.846103  299667 cri.go:89] found id: ""
	I1205 07:50:09.846128  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.846137  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:09.846144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:09.846215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:09.869920  299667 cri.go:89] found id: ""
	I1205 07:50:09.869943  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.869952  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:09.869958  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:09.870017  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:09.894186  299667 cri.go:89] found id: ""
	I1205 07:50:09.894207  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.894216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:09.894222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:09.894279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:09.918290  299667 cri.go:89] found id: ""
	I1205 07:50:09.918323  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.918332  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:09.918338  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:09.918404  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:09.942213  299667 cri.go:89] found id: ""
	I1205 07:50:09.942241  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.942250  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:09.942260  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:09.942300  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.971801  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:09.971827  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:10.027693  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:10.027732  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:10.042067  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:10.042095  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:11.102920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:13.602347  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:10.106137  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
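(Editor's note: the describe-nodes gatherer fails for the same underlying reason as the container probes: the node-local kubeconfig points kubectl at localhost:8443, where no apiserver is listening yet, so client-go's discovery layer (memcache.go) retries and gives up with connection refused. Reproduced directly on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # -> "The connection to the server localhost:8443 was refused" until kube-apiserver comes up
)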
	I1205 07:50:10.106162  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:10.106175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.633673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:12.645469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:12.645547  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:12.676971  299667 cri.go:89] found id: ""
	I1205 07:50:12.676997  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.677007  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:12.677014  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:12.677084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:12.702338  299667 cri.go:89] found id: ""
	I1205 07:50:12.702361  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.702370  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:12.702377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:12.702436  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:12.726932  299667 cri.go:89] found id: ""
	I1205 07:50:12.726958  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.726968  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:12.726974  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:12.727054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:12.752194  299667 cri.go:89] found id: ""
	I1205 07:50:12.752231  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.752240  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:12.752246  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:12.752354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:12.777805  299667 cri.go:89] found id: ""
	I1205 07:50:12.777874  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.777897  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:12.777917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:12.777990  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:12.802215  299667 cri.go:89] found id: ""
	I1205 07:50:12.802240  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.802250  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:12.802257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:12.802334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:12.831796  299667 cri.go:89] found id: ""
	I1205 07:50:12.831821  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.831830  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:12.831836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:12.831899  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:12.856886  299667 cri.go:89] found id: ""
	I1205 07:50:12.856912  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.856921  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:12.856930  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:12.856941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:12.870323  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:12.870352  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:12.933303  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:12.933325  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:12.933339  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.958156  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:12.958191  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:12.986132  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:12.986158  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:15.602727  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:17.602807  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
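(Editor's note: the interleaved W-lines from pid 297527 belong to the concurrent no-preload test, which polls the node's Ready condition against the apiserver at 192.168.76.2:8443; connection refused again means nothing is listening on that port yet. Equivalent manual probes, as a sketch; the curl call fails identically without credentials while the server is down, and the kubectl form assumes a kubeconfig pointing at this cluster:

    # the raw GET the poller issues
    curl -k https://192.168.76.2:8443/api/v1/nodes/no-preload-241270
    # or extract just the Ready condition once the server answers
    kubectl get node no-preload-241270 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
)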
	I1205 07:50:15.543265  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:15.553756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:15.553824  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:15.579618  299667 cri.go:89] found id: ""
	I1205 07:50:15.579641  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.579650  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:15.579656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:15.579719  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:15.615622  299667 cri.go:89] found id: ""
	I1205 07:50:15.615646  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.615654  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:15.615660  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:15.615718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:15.648566  299667 cri.go:89] found id: ""
	I1205 07:50:15.648595  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.648604  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:15.648610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:15.648669  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:15.678106  299667 cri.go:89] found id: ""
	I1205 07:50:15.678132  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.678141  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:15.678147  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:15.678210  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:15.703125  299667 cri.go:89] found id: ""
	I1205 07:50:15.703148  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.703157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:15.703163  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:15.703229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:15.727847  299667 cri.go:89] found id: ""
	I1205 07:50:15.727873  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.727882  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:15.727889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:15.727948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:15.755105  299667 cri.go:89] found id: ""
	I1205 07:50:15.755129  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.755138  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:15.755144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:15.755203  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:15.780309  299667 cri.go:89] found id: ""
	I1205 07:50:15.780334  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.780343  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:15.780351  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:15.780362  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:15.836755  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:15.836788  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:15.850164  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:15.850241  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:15.913792  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:15.913812  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:15.913828  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:15.938310  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:15.938344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
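(Editor's note: the "container status" gatherer above is a small shell fallback chain: use crictl from PATH if present, otherwise the bare word crictl so the failure stays visible, and if that whole command fails, fall back to docker ps -a. Unpacked, assuming a bash shell on the node:

    # command substitution picks the crictl path if installed, else the literal name
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
)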
	I1205 07:50:18.465299  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:18.475870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:18.475939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:18.501780  299667 cri.go:89] found id: ""
	I1205 07:50:18.501806  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.501821  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:18.501828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:18.501886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:18.526890  299667 cri.go:89] found id: ""
	I1205 07:50:18.526920  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.526929  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:18.526936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:18.526996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:18.552506  299667 cri.go:89] found id: ""
	I1205 07:50:18.552531  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.552540  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:18.552546  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:18.552605  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:18.577492  299667 cri.go:89] found id: ""
	I1205 07:50:18.577517  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.577526  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:18.577533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:18.577591  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:18.609705  299667 cri.go:89] found id: ""
	I1205 07:50:18.609731  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.609740  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:18.609746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:18.609804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:18.637216  299667 cri.go:89] found id: ""
	I1205 07:50:18.637242  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.637251  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:18.637258  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:18.637315  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:18.663025  299667 cri.go:89] found id: ""
	I1205 07:50:18.663051  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.663060  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:18.663067  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:18.663145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:18.689022  299667 cri.go:89] found id: ""
	I1205 07:50:18.689086  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.689109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:18.689131  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:18.689192  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:18.703250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:18.703279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:18.768192  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:18.768211  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:18.768223  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:18.793554  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:18.793585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.828893  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:18.828920  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
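(Editor's note: the kubelet and kernel gatherers bound how much they pull back: journalctl -n 400 takes only the newest 400 lines of the unit's journal, while the dmesg invocation disables the pager (-P) and color (-L=never), keeps human-readable output (-H), and filters to warn-and-worse records before tailing. Restated on their own:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
)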
	W1205 07:50:20.102540  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:22.602506  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:24.602962  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:21.385309  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:21.397376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:21.397451  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:21.424618  299667 cri.go:89] found id: ""
	I1205 07:50:21.424642  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.424652  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:21.424659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:21.424717  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:21.451181  299667 cri.go:89] found id: ""
	I1205 07:50:21.451202  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.451211  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:21.451217  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:21.451275  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:21.475206  299667 cri.go:89] found id: ""
	I1205 07:50:21.475228  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.475237  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:21.475243  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:21.475300  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:21.505637  299667 cri.go:89] found id: ""
	I1205 07:50:21.505663  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.505672  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:21.505679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:21.505738  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:21.534466  299667 cri.go:89] found id: ""
	I1205 07:50:21.534541  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.534557  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:21.534579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:21.534644  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:21.560428  299667 cri.go:89] found id: ""
	I1205 07:50:21.560453  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.560462  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:21.560472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:21.560530  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:21.584825  299667 cri.go:89] found id: ""
	I1205 07:50:21.584852  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.584860  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:21.584867  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:21.584934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:21.623066  299667 cri.go:89] found id: ""
	I1205 07:50:21.623093  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.623102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:21.623112  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:21.623127  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:21.687398  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:21.687435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:21.702122  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:21.702149  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:21.767031  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:21.767050  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:21.767063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:21.791862  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:21.791895  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.321349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:24.331708  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:24.331778  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:24.369231  299667 cri.go:89] found id: ""
	I1205 07:50:24.369255  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.369264  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:24.369270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:24.369345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:24.397058  299667 cri.go:89] found id: ""
	I1205 07:50:24.397078  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.397088  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:24.397094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:24.397152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:24.425233  299667 cri.go:89] found id: ""
	I1205 07:50:24.425256  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.425264  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:24.425271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:24.425325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:24.451011  299667 cri.go:89] found id: ""
	I1205 07:50:24.451032  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.451041  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:24.451047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:24.451103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:24.475249  299667 cri.go:89] found id: ""
	I1205 07:50:24.475278  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.475287  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:24.475294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:24.475352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:24.500860  299667 cri.go:89] found id: ""
	I1205 07:50:24.500885  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.500895  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:24.500911  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:24.500969  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:24.525728  299667 cri.go:89] found id: ""
	I1205 07:50:24.525751  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.525771  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:24.525778  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:24.525839  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:24.549854  299667 cri.go:89] found id: ""
	I1205 07:50:24.549877  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.549885  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:24.549894  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:24.549923  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:24.574340  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:24.574371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.609821  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:24.609850  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:24.668879  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:24.668917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:24.683025  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:24.683052  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:24.745503  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
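(Editor's note: each retry cycle opens with a cheap process check before any CRI queries: pgrep -f matches the pattern against the full command line, -x requires the whole line to match, and -n returns only the newest such process. As a standalone probe, quoted for safety in an interactive shell, it exits non-zero with no output while the apiserver has not started:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
)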
	W1205 07:50:27.102442  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:29.102897  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:27.247317  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:27.258551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:27.258627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:27.282556  299667 cri.go:89] found id: ""
	I1205 07:50:27.282584  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.282594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:27.282601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:27.282685  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:27.311566  299667 cri.go:89] found id: ""
	I1205 07:50:27.311593  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.311602  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:27.311608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:27.311666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:27.336201  299667 cri.go:89] found id: ""
	I1205 07:50:27.336226  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.336235  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:27.336241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:27.336295  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:27.374655  299667 cri.go:89] found id: ""
	I1205 07:50:27.374733  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.374756  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:27.374804  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:27.374881  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:27.403358  299667 cri.go:89] found id: ""
	I1205 07:50:27.403381  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.403390  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:27.403396  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:27.403453  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:27.434322  299667 cri.go:89] found id: ""
	I1205 07:50:27.434347  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.434355  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:27.434362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:27.434430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:27.458621  299667 cri.go:89] found id: ""
	I1205 07:50:27.458643  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.458651  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:27.458669  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:27.458726  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:27.487490  299667 cri.go:89] found id: ""
	I1205 07:50:27.487514  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.487524  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:27.487532  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:27.487543  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:27.515434  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:27.515462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:27.574832  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:27.574864  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:27.588186  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:27.588210  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:27.666339  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:27.666400  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:27.666420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:50:31.602443  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:34.102266  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:30.192057  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:30.203579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:30.203657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:30.233613  299667 cri.go:89] found id: ""
	I1205 07:50:30.233663  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.233673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:30.233680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:30.233739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:30.262491  299667 cri.go:89] found id: ""
	I1205 07:50:30.262517  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.262526  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:30.262532  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:30.262599  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:30.292006  299667 cri.go:89] found id: ""
	I1205 07:50:30.292031  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.292042  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:30.292078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:30.292134  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:30.317938  299667 cri.go:89] found id: ""
	I1205 07:50:30.317963  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.317972  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:30.317979  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:30.318037  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:30.359844  299667 cri.go:89] found id: ""
	I1205 07:50:30.359871  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.359880  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:30.359887  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:30.359946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:30.391160  299667 cri.go:89] found id: ""
	I1205 07:50:30.391187  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.391196  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:30.391202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:30.391256  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:30.424091  299667 cri.go:89] found id: ""
	I1205 07:50:30.424116  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.424124  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:30.424131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:30.424186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:30.449137  299667 cri.go:89] found id: ""
	I1205 07:50:30.449184  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.449193  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:30.449204  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:30.449216  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:30.477964  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:30.477990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:30.535174  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:30.535208  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:30.548511  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:30.548537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:30.611856  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:30.611880  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:30.611892  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
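	The cycle above (cri.go listing each expected control-plane component, logs.go reporting "0 containers") repeats below every ~2.5 seconds while the apiserver stays down. A minimal sketch of that probe, assuming only that crictl is on the node's PATH and that an empty --quiet listing means no matching container; the function and variable names are illustrative, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors what the log records: run
	// `sudo crictl ps -a --quiet --name=<component>` and split the
	// output into container IDs. An empty result is the
	// "0 containers" / `No container was found matching ...` case.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("probe %s failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

	crictl's --name flag filters by container name, so a single probe per component is enough to decide whether that component's logs can be gathered.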
	I1205 07:50:33.137527  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:33.148376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:33.148457  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:33.173779  299667 cri.go:89] found id: ""
	I1205 07:50:33.173802  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.173810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:33.173816  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:33.173893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:33.198637  299667 cri.go:89] found id: ""
	I1205 07:50:33.198661  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.198671  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:33.198678  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:33.198739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:33.227950  299667 cri.go:89] found id: ""
	I1205 07:50:33.227972  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.227980  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:33.227986  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:33.228056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:33.252400  299667 cri.go:89] found id: ""
	I1205 07:50:33.252434  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.252446  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:33.252454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:33.252528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:33.277287  299667 cri.go:89] found id: ""
	I1205 07:50:33.277311  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.277320  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:33.277326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:33.277384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:33.303260  299667 cri.go:89] found id: ""
	I1205 07:50:33.303285  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.303294  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:33.303310  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:33.303387  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:33.327837  299667 cri.go:89] found id: ""
	I1205 07:50:33.327860  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.327868  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:33.327875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:33.327934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:33.361138  299667 cri.go:89] found id: ""
	I1205 07:50:33.361196  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.361206  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:33.361216  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:33.361227  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:33.439490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:33.439534  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:33.454134  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:33.454201  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:33.519248  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:33.519324  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:33.519346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.544362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:33.544404  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
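	Every "failed describe nodes" block in this stretch is the same symptom seen from a different angle: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container running the TCP connect is refused, so kubectl exits with status 1. A hedged sketch of a wrapper that produces the stdout:/stderr: sections shown in those blocks (the helper name is hypothetical; minikube's ssh_runner executes the command over SSH on the node rather than locally):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// describeNodes runs the same command the log shows and keeps
	// stdout and stderr separate, like the `stdout:` / `stderr:`
	// sections in the failure message above.
	func describeNodes(kubectl, kubeconfig string) (string, string, error) {
		cmd := exec.Command("sudo", kubectl, "describe", "nodes",
			"--kubeconfig="+kubeconfig)
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		err := cmd.Run() // "Process exited with status 1" surfaces here
		return stdout.String(), stderr.String(), err
	}

	func main() {
		out, errOut, err := describeNodes(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, out, errOut)
		}
	}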
	W1205 07:50:36.102706  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:38.602248  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:36.073913  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:36.085180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:36.085254  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:36.111524  299667 cri.go:89] found id: ""
	I1205 07:50:36.111549  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.111558  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:36.111565  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:36.111624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:36.136758  299667 cri.go:89] found id: ""
	I1205 07:50:36.136832  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.136856  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:36.136874  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:36.136999  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:36.170081  299667 cri.go:89] found id: ""
	I1205 07:50:36.170105  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.170113  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:36.170120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:36.170177  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:36.194713  299667 cri.go:89] found id: ""
	I1205 07:50:36.194738  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.194747  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:36.194753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:36.194817  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:36.219168  299667 cri.go:89] found id: ""
	I1205 07:50:36.219190  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.219199  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:36.219205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:36.219272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:36.243582  299667 cri.go:89] found id: ""
	I1205 07:50:36.243653  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.243676  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:36.243694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:36.243775  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:36.268659  299667 cri.go:89] found id: ""
	I1205 07:50:36.268730  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.268754  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:36.268771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:36.268853  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:36.293268  299667 cri.go:89] found id: ""
	I1205 07:50:36.293338  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.293361  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:36.293383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:36.293416  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:36.372932  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:36.372960  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:36.372972  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:36.400267  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:36.400358  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:36.432348  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:36.432371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:36.488499  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:36.488533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.002493  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:39.016301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:39.016371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:39.041723  299667 cri.go:89] found id: ""
	I1205 07:50:39.041799  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.041815  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:39.041823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:39.041885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:39.066151  299667 cri.go:89] found id: ""
	I1205 07:50:39.066174  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.066183  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:39.066189  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:39.066266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:39.090650  299667 cri.go:89] found id: ""
	I1205 07:50:39.090673  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.090682  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:39.090688  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:39.090745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:39.119700  299667 cri.go:89] found id: ""
	I1205 07:50:39.119732  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.119740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:39.119747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:39.119810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:39.144307  299667 cri.go:89] found id: ""
	I1205 07:50:39.144369  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.144389  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:39.144406  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:39.144488  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:39.171025  299667 cri.go:89] found id: ""
	I1205 07:50:39.171048  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.171057  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:39.171063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:39.171127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:39.195100  299667 cri.go:89] found id: ""
	I1205 07:50:39.195121  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.195130  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:39.195136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:39.195197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:39.218959  299667 cri.go:89] found id: ""
	I1205 07:50:39.218980  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.218991  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:39.219000  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:39.219010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:39.243315  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:39.243346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:39.270633  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:39.270709  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:39.330141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:39.330172  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.345855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:39.345883  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:39.426940  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:50:40.603240  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:43.103156  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
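	The W-level node_ready lines from pid 297527 interleaved here belong to the concurrent no-preload-241270 test, which polls that node's Ready condition against 192.168.76.2:8443 on a ~2.5 s cadence and gets connection refused for the same underlying reason. A client-go sketch of such a poll, under the assumption of a standard kubeconfig (the path below is hypothetical):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has Ready=True.
	// Connection-refused errors come back from Get, matching the
	// `error getting node ... (will retry)` warnings above.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "no-preload-241270")
			if err != nil {
				fmt.Printf("error getting node (will retry): %v\n", err)
			} else if ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2500 * time.Millisecond) // ~2.5 s cadence seen in the log
		}
	}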
	I1205 07:50:41.928763  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:41.939293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:41.939415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:41.964816  299667 cri.go:89] found id: ""
	I1205 07:50:41.964850  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.964859  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:41.964865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:41.964931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:41.990880  299667 cri.go:89] found id: ""
	I1205 07:50:41.990914  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.990923  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:41.990929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:41.990996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:42.022456  299667 cri.go:89] found id: ""
	I1205 07:50:42.022483  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.022494  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:42.022501  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:42.022570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:42.049261  299667 cri.go:89] found id: ""
	I1205 07:50:42.049328  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.049352  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:42.049369  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:42.049446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:42.077034  299667 cri.go:89] found id: ""
	I1205 07:50:42.077108  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.077134  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:42.077255  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:42.077338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:42.114881  299667 cri.go:89] found id: ""
	I1205 07:50:42.114910  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.114921  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:42.114928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:42.114994  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:42.151897  299667 cri.go:89] found id: ""
	I1205 07:50:42.151926  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.151936  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:42.151944  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:42.152012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:42.185532  299667 cri.go:89] found id: ""
	I1205 07:50:42.185556  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.185565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:42.185574  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:42.185585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:42.246490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:42.246537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:42.262324  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:42.262359  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:42.331135  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:42.331201  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:42.331219  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:42.358803  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:42.358836  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
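	The "container status" gatherer that closes each cycle uses a shell fallback chain: resolve crictl with `which` (falling back to the bare name), and only if that listing fails run `docker ps -a`. Passing the one-liner through bash preserves the `||` short-circuit semantics. A small standalone sketch of the same idiom, not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus reproduces the fallback one-liner from the log:
	// prefer crictl (path resolved via `which`, else the bare name on
	// PATH), and only if that command fails fall back to docker.
	func containerStatus() (string, error) {
		script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
		}
		fmt.Print(out)
	}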
	I1205 07:50:44.909321  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:44.920001  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:44.920070  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:44.945367  299667 cri.go:89] found id: ""
	I1205 07:50:44.945392  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.945401  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:44.945407  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:44.945463  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:44.970751  299667 cri.go:89] found id: ""
	I1205 07:50:44.970779  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.970788  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:44.970794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:44.970873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:44.999654  299667 cri.go:89] found id: ""
	I1205 07:50:44.999678  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.999688  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:44.999694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:44.999760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:45.065387  299667 cri.go:89] found id: ""
	I1205 07:50:45.065496  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.065521  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:45.065554  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:45.065661  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1205 07:50:45.105072  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:47.602920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:45.101338  299667 cri.go:89] found id: ""
	I1205 07:50:45.101365  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.101375  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:45.101386  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:45.101459  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:45.140148  299667 cri.go:89] found id: ""
	I1205 07:50:45.140181  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.140192  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:45.140200  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:45.140301  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:45.178981  299667 cri.go:89] found id: ""
	I1205 07:50:45.179025  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.179035  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:45.179043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:45.179176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:45.219922  299667 cri.go:89] found id: ""
	I1205 07:50:45.219949  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.219958  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:45.219969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:45.219989  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:45.291787  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:45.291824  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:45.306539  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:45.306565  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:45.383110  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:45.383171  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:45.383206  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:45.410722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:45.410808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:47.941304  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:47.952011  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:47.952084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:47.978179  299667 cri.go:89] found id: ""
	I1205 07:50:47.978201  299667 logs.go:282] 0 containers: []
	W1205 07:50:47.978210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:47.978216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:47.978274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:48.005927  299667 cri.go:89] found id: ""
	I1205 07:50:48.005954  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.005964  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:48.005971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:48.006042  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:48.040049  299667 cri.go:89] found id: ""
	I1205 07:50:48.040133  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.040156  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:48.040175  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:48.040269  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:48.066524  299667 cri.go:89] found id: ""
	I1205 07:50:48.066549  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.066558  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:48.066564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:48.066627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:48.096997  299667 cri.go:89] found id: ""
	I1205 07:50:48.097026  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.097036  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:48.097043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:48.097103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:48.123968  299667 cri.go:89] found id: ""
	I1205 07:50:48.123990  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.123999  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:48.124005  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:48.124066  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:48.151529  299667 cri.go:89] found id: ""
	I1205 07:50:48.151554  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.151564  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:48.151570  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:48.151629  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:48.181245  299667 cri.go:89] found id: ""
	I1205 07:50:48.181270  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.181279  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:48.181297  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:48.181308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:48.240786  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:48.240832  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:48.255504  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:48.255533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:48.325828  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:48.325849  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:48.325862  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:48.350818  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:48.350898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
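	Each gather cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires the whole line to match, and -n keeps only the newest match. pgrep exits with status 1 when nothing matches, consistent with the log moving straight on to the per-component container probes. An illustrative sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverPID mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`.
	// A non-nil error here is typically pgrep's exit status 1, i.e.
	// the quiet "no apiserver process yet" outcome between cycles.
	func apiserverPID() (string, bool) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", false
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		if pid, ok := apiserverPID(); ok {
			fmt.Println("kube-apiserver pid:", pid)
		} else {
			fmt.Println("kube-apiserver not running")
		}
	}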
	W1205 07:50:50.103331  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:52.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:50.887376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:50.898712  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:50.898787  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:50.926387  299667 cri.go:89] found id: ""
	I1205 07:50:50.926412  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.926421  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:50.926428  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:50.926499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:50.951318  299667 cri.go:89] found id: ""
	I1205 07:50:50.951341  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.951349  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:50.951356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:50.951431  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:50.978509  299667 cri.go:89] found id: ""
	I1205 07:50:50.978536  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.978545  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:50.978551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:50.978614  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:51.017851  299667 cri.go:89] found id: ""
	I1205 07:50:51.017875  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.017884  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:51.017894  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:51.017957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:51.048705  299667 cri.go:89] found id: ""
	I1205 07:50:51.048772  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.048797  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:51.048815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:51.048901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:51.078364  299667 cri.go:89] found id: ""
	I1205 07:50:51.078427  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.078448  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:51.078468  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:51.078560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:51.110914  299667 cri.go:89] found id: ""
	I1205 07:50:51.110955  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.110965  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:51.110970  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:51.111064  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:51.136737  299667 cri.go:89] found id: ""
	I1205 07:50:51.136762  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.136771  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:51.136781  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:51.136793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:51.197928  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:51.197949  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:51.197961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:51.222938  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:51.222968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:51.253887  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:51.253914  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:51.309729  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:51.309759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:53.824280  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:53.834821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:53.834895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:53.882567  299667 cri.go:89] found id: ""
	I1205 07:50:53.882607  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.882617  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:53.882623  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:53.882708  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:53.924413  299667 cri.go:89] found id: ""
	I1205 07:50:53.924439  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.924447  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:53.924454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:53.924521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:53.949296  299667 cri.go:89] found id: ""
	I1205 07:50:53.949329  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.949339  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:53.949345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:53.949421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:53.973974  299667 cri.go:89] found id: ""
	I1205 07:50:53.974036  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.974050  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:53.974058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:53.974114  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:53.999073  299667 cri.go:89] found id: ""
	I1205 07:50:53.999139  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.999154  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:53.999162  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:53.999221  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:54.026401  299667 cri.go:89] found id: ""
	I1205 07:50:54.026425  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.026434  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:54.026441  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:54.026523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:54.056156  299667 cri.go:89] found id: ""
	I1205 07:50:54.056181  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.056191  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:54.056197  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:54.056266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:54.080916  299667 cri.go:89] found id: ""
	I1205 07:50:54.080955  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.080964  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:54.080973  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:54.080985  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:54.105836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:54.105870  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:54.134673  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:54.134702  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:54.191141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:54.191175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:54.204290  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:54.204332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:54.267087  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:50:55.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:57.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:59.602402  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:56.768821  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:56.779222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:56.779288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:56.807155  299667 cri.go:89] found id: ""
	I1205 07:50:56.807179  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.807188  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:56.807195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:56.807280  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:56.831710  299667 cri.go:89] found id: ""
	I1205 07:50:56.831737  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.831746  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:56.831753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:56.831812  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:56.867145  299667 cri.go:89] found id: ""
	I1205 07:50:56.867169  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.867178  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:56.867185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:56.867243  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:56.893127  299667 cri.go:89] found id: ""
	I1205 07:50:56.893152  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.893174  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:56.893180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:56.893237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:56.922421  299667 cri.go:89] found id: ""
	I1205 07:50:56.922450  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.922460  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:56.922466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:56.922543  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:56.945778  299667 cri.go:89] found id: ""
	I1205 07:50:56.945808  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.945817  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:56.945823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:56.945907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:56.974442  299667 cri.go:89] found id: ""
	I1205 07:50:56.974473  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.974482  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:56.974489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:56.974559  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:56.998662  299667 cri.go:89] found id: ""
	I1205 07:50:56.998685  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.998694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:56.998703  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:56.998715  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:57.058833  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:57.058867  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:57.072293  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:57.072322  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:57.139010  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:57.139030  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:57.139042  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:57.163607  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:57.163639  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.693334  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:59.704756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:59.704870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:59.732171  299667 cri.go:89] found id: ""
	I1205 07:50:59.732198  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.732208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:59.732214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:59.732272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:59.757954  299667 cri.go:89] found id: ""
	I1205 07:50:59.757981  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.757990  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:59.757996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:59.758076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:59.787824  299667 cri.go:89] found id: ""
	I1205 07:50:59.787846  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.787855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:59.787862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:59.787977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:59.813474  299667 cri.go:89] found id: ""
	I1205 07:50:59.813497  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.813506  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:59.813512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:59.813580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:59.842057  299667 cri.go:89] found id: ""
	I1205 07:50:59.842079  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.842088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:59.842094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:59.842162  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:59.872569  299667 cri.go:89] found id: ""
	I1205 07:50:59.872593  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.872602  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:59.872608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:59.872671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:59.905410  299667 cri.go:89] found id: ""
	I1205 07:50:59.905435  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.905443  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:59.905450  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:59.905514  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:59.932703  299667 cri.go:89] found id: ""
	I1205 07:50:59.932744  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.932754  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:59.932763  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:59.932774  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.964043  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:59.964069  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:00.020877  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:00.023486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:00.055130  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:00.055166  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:02.102411  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:04.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:00.182237  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:00.182280  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:00.182298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:02.739834  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:02.750886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:02.750958  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:02.776293  299667 cri.go:89] found id: ""
	I1205 07:51:02.776319  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.776328  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:02.776334  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:02.776393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:02.803043  299667 cri.go:89] found id: ""
	I1205 07:51:02.803080  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.803089  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:02.803096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:02.803176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:02.827935  299667 cri.go:89] found id: ""
	I1205 07:51:02.827957  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.827966  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:02.827972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:02.828031  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:02.859181  299667 cri.go:89] found id: ""
	I1205 07:51:02.859204  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.859215  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:02.859222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:02.859282  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:02.893626  299667 cri.go:89] found id: ""
	I1205 07:51:02.893668  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.893678  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:02.893685  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:02.893755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:02.924778  299667 cri.go:89] found id: ""
	I1205 07:51:02.924808  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.924818  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:02.924830  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:02.924890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:02.950184  299667 cri.go:89] found id: ""
	I1205 07:51:02.950211  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.950220  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:02.950229  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:02.950288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:02.976829  299667 cri.go:89] found id: ""
	I1205 07:51:02.976855  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.976865  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:02.976874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:02.976885  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:03.015998  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:03.016071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:03.072438  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:03.072473  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:03.087250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:03.087283  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:03.153281  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:03.153306  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:03.153319  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:07.103249  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:09.602341  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:05.678289  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:05.688964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:05.689032  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:05.714382  299667 cri.go:89] found id: ""
	I1205 07:51:05.714403  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.714412  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:05.714419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:05.714486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:05.743946  299667 cri.go:89] found id: ""
	I1205 07:51:05.743968  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.743976  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:05.743983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:05.744043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:05.768270  299667 cri.go:89] found id: ""
	I1205 07:51:05.768293  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.768303  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:05.768309  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:05.768367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:05.795557  299667 cri.go:89] found id: ""
	I1205 07:51:05.795580  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.795588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:05.795595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:05.795652  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:05.820607  299667 cri.go:89] found id: ""
	I1205 07:51:05.820634  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.820643  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:05.820649  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:05.820707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:05.853624  299667 cri.go:89] found id: ""
	I1205 07:51:05.853648  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.853657  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:05.853670  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:05.853752  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:05.885144  299667 cri.go:89] found id: ""
	I1205 07:51:05.885200  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.885213  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:05.885219  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:05.885296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:05.917755  299667 cri.go:89] found id: ""
	I1205 07:51:05.917777  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.917785  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:05.917794  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:05.917808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:05.978242  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:05.978286  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:05.992931  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:05.992961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:06.070949  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:06.070979  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:06.070992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:06.096749  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:06.096780  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.634532  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:08.646959  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:08.647038  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:08.678851  299667 cri.go:89] found id: ""
	I1205 07:51:08.678875  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.678884  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:08.678890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:08.678954  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:08.702970  299667 cri.go:89] found id: ""
	I1205 07:51:08.702992  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.703001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:08.703006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:08.703063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:08.727238  299667 cri.go:89] found id: ""
	I1205 07:51:08.727259  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.727267  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:08.727273  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:08.727329  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:08.752084  299667 cri.go:89] found id: ""
	I1205 07:51:08.752106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.752114  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:08.752120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:08.752183  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:08.775775  299667 cri.go:89] found id: ""
	I1205 07:51:08.775797  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.775805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:08.775811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:08.775878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:08.800101  299667 cri.go:89] found id: ""
	I1205 07:51:08.800122  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.800130  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:08.800136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:08.800193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:08.826081  299667 cri.go:89] found id: ""
	I1205 07:51:08.826106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.826115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:08.826121  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:08.826179  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:08.850937  299667 cri.go:89] found id: ""
	I1205 07:51:08.850969  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.850979  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:08.850987  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:08.851004  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.884057  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:08.884093  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:08.946750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:08.946793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:08.960852  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:08.960880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:09.030565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:09.030587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:09.030601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:11.602638  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:12.602298  297527 node_ready.go:38] duration metric: took 6m0.000452624s for node "no-preload-241270" to be "Ready" ...
	I1205 07:51:12.605551  297527 out.go:203] 
	W1205 07:51:12.608371  297527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 07:51:12.608388  297527 out.go:285] * 
	W1205 07:51:12.610554  297527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:51:12.612665  297527 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467880478Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467944766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468010408Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468070889Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468137597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468209844Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468274591Z" level=info msg="runtime interface created"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468329098Z" level=info msg="created NRI interface"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468386092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468477137Z" level=info msg="Connect containerd service"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468834006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.469689958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479649538Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479732066Z" level=info msg="Start recovering state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480037743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480474506Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497682696Z" level=info msg="Start event monitor"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497734635Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497745409Z" level=info msg="Start streaming server"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497758537Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497768318Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497774849Z" level=info msg="starting plugins..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497803961Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.498055853Z" level=info msg="containerd successfully booted in 0.055465s"
	Dec 05 07:45:10 no-preload-241270 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:14.670928    3902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:14.671758    3902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:14.673490    3902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:14.673790    3902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:14.675192    3902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:14 up  2:33,  0 user,  load average: 0.74, 0.76, 1.30
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:51:11 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:11 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 05 07:51:11 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:11 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:11 no-preload-241270 kubelet[3782]: E1205 07:51:11.910312    3782 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:11 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:11 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:12 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 05 07:51:12 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:12 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:12 no-preload-241270 kubelet[3787]: E1205 07:51:12.686396    3787 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:12 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:12 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:13 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 05 07:51:13 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:13 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:13 no-preload-241270 kubelet[3808]: E1205 07:51:13.402650    3808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:13 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:13 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:14 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 05 07:51:14 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:14 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:14 no-preload-241270 kubelet[3813]: E1205 07:51:14.145640    3813 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:14 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:14 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
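Note: the kubelet restart loop captured above is the proximate cause of this failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, and the error text shows this runner is still on cgroup v1. A quick way to confirm which cgroup version a host is using (an illustrative check, not part of the test harness):

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
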
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 2 (374.520369ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (372.02s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (375.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1205 07:47:16.967716    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:48:01.797759    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:48:11.309139    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:49:14.019784    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:50:06.311393    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m10.283804071s)

-- stdout --
	* [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 

-- /stdout --
** stderr ** 
	I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:25.090022  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090052  299667 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:25.090069  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090384  299667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:25.090842  299667 out.go:368] Setting JSON to false
	I1205 07:45:25.091806  299667 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8872,"bootTime":1764911853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:25.091916  299667 start.go:143] virtualization:  
	I1205 07:45:25.094988  299667 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:25.098817  299667 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:25.098909  299667 notify.go:221] Checking for updates...
	I1205 07:45:25.105041  299667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:25.108085  299667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:25.111075  299667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:25.114070  299667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:25.117093  299667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:25.120796  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:25.121387  299667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:25.146702  299667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:25.146810  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.201970  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.192879595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.202086  299667 docker.go:319] overlay module found
	I1205 07:45:25.205420  299667 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:25.208200  299667 start.go:309] selected driver: docker
	I1205 07:45:25.208216  299667 start.go:927] validating driver "docker" against &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.208322  299667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:25.209018  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.271889  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.262935561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.272253  299667 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:45:25.272290  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:25.272360  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:25.272408  299667 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.275549  299667 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:45:25.278335  299667 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:25.281398  299667 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:25.284371  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:25.284526  299667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:25.304420  299667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:25.304443  299667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:45:25.350688  299667 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:45:25.522612  299667 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
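Note: the two 404s above mean no preload tarball has been published for v1.35.0-beta.0, so minikube falls back to its per-image cache (the cache.go lines that follow, which find each image already cached on the host). Whether a preload exists for a given version can be probed directly with the URL from the log (an illustrative check, not part of the test harness):

	# a HEAD request; the first line shows the HTTP status (404 here)
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1
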
	I1205 07:45:25.522872  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.522902  299667 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.522986  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:25.522997  299667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.314µs
	I1205 07:45:25.523010  299667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:25.523020  299667 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523050  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:25.523054  299667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.177µs
	I1205 07:45:25.523060  299667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523070  299667 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523108  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:25.523117  299667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.906µs
	I1205 07:45:25.523123  299667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523137  299667 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523144  299667 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:25.523164  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:25.523170  299667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.867µs
	I1205 07:45:25.523176  299667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523180  299667 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523184  299667 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523220  299667 start.go:364] duration metric: took 26.043µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:45:25.523232  299667 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:25.523223  299667 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523248  299667 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:25.523282  299667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.595µs
	I1205 07:45:25.523288  299667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:25.523289  299667 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523237  299667 fix.go:54] fixHost starting: 
	I1205 07:45:25.523319  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:25.523328  299667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 144.182µs
	I1205 07:45:25.523335  299667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523296  299667 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 85.228µs
	I1205 07:45:25.523346  299667 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:25.523368  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:25.523373  299667 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 85.498µs
	I1205 07:45:25.523378  299667 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:25.523390  299667 cache.go:87] Successfully saved all images to host disk.
	I1205 07:45:25.523585  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.542111  299667 fix.go:112] recreateIfNeeded on newest-cni-622440: state=Stopped err=<nil>
	W1205 07:45:25.542142  299667 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:45:25.545608  299667 out.go:252] * Restarting existing docker container for "newest-cni-622440" ...
	I1205 07:45:25.545717  299667 cli_runner.go:164] Run: docker start newest-cni-622440
	I1205 07:45:25.826053  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.856383  299667 kic.go:430] container "newest-cni-622440" state is running.
	I1205 07:45:25.856775  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:25.877321  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.877542  299667 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:25.878047  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:25.903226  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:25.903553  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:25.903561  299667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:25.904107  299667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35184->127.0.0.1:33103: read: connection reset by peer
	I1205 07:45:29.056730  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.056754  299667 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:45:29.056818  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.074923  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.075238  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.075256  299667 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:45:29.238817  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.238924  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.256394  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.256698  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.256720  299667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:29.409360  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:29.409384  299667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:29.409403  299667 ubuntu.go:190] setting up certificates
	I1205 07:45:29.409412  299667 provision.go:84] configureAuth start
	I1205 07:45:29.409469  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:29.426522  299667 provision.go:143] copyHostCerts
	I1205 07:45:29.426598  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:29.426610  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:29.426695  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:29.426806  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:29.426817  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:29.426846  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:29.426910  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:29.426920  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:29.426946  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:29.427008  299667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:45:29.583992  299667 provision.go:177] copyRemoteCerts
	I1205 07:45:29.584079  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:29.584142  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.601241  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.705331  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:29.723929  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:29.741035  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:45:29.758654  299667 provision.go:87] duration metric: took 349.219709ms to configureAuth
	I1205 07:45:29.758682  299667 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:29.758882  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:29.758893  299667 machine.go:97] duration metric: took 3.881342431s to provisionDockerMachine
	I1205 07:45:29.758901  299667 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:45:29.758917  299667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:29.758966  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:29.759008  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.777016  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.881927  299667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:29.889885  299667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:29.889915  299667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:29.889927  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:29.889986  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:29.890075  299667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:29.890181  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:29.899716  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:29.920554  299667 start.go:296] duration metric: took 161.628343ms for postStartSetup
	I1205 07:45:29.920647  299667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:29.920717  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.938834  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.040045  299667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:30.045649  299667 fix.go:56] duration metric: took 4.522402293s for fixHost
	I1205 07:45:30.045683  299667 start.go:83] releasing machines lock for "newest-cni-622440", held for 4.522453444s
	I1205 07:45:30.045767  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:30.065623  299667 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:30.065678  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.065694  299667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:30.065761  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.087940  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.099183  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.281502  299667 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:30.288110  299667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:30.292481  299667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:30.292550  299667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:30.300562  299667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:30.300584  299667 start.go:496] detecting cgroup driver to use...
	I1205 07:45:30.300616  299667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:30.300666  299667 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:30.318364  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:30.332088  299667 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:30.332151  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:30.348258  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:30.361775  299667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:30.469361  299667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:30.577441  299667 docker.go:234] disabling docker service ...
	I1205 07:45:30.577508  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:30.592915  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:30.607578  299667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:30.752107  299667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:30.872747  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:30.888408  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:30.904134  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:30.914385  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:30.923315  299667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:30.923423  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:30.932175  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.940943  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:30.949729  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.958228  299667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:30.965941  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:30.980042  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:30.995740  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:31.009747  299667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:31.019595  299667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:31.028525  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.153254  299667 ssh_runner.go:195] Run: sudo systemctl restart containerd
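Note: the sed edits above align containerd with the "cgroupfs" driver that minikube detected on the host (see "detected \"cgroupfs\" cgroup driver" earlier). After the restart, the effective setting can be double-checked with (illustrative, not part of the test harness):

	# expect SystemdCgroup = false when the cgroupfs driver is in use
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
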
	I1205 07:45:31.252043  299667 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:31.252123  299667 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:31.255724  299667 start.go:564] Will wait 60s for crictl version
	I1205 07:45:31.255784  299667 ssh_runner.go:195] Run: which crictl
	I1205 07:45:31.259402  299667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:31.288033  299667 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
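Note: bare crictl calls like the one above resolve to containerd because of the /etc/crictl.yaml written earlier (runtime-endpoint: unix:///run/containerd/containerd.sock); without that file, the endpoint has to be passed on every invocation (an illustrative equivalent):

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
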
	I1205 07:45:31.288102  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.310723  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.334839  299667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:31.337671  299667 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:31.359874  299667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:31.365663  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.387524  299667 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:45:31.390412  299667 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:31.390547  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:31.390648  299667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:31.429142  299667 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:31.429206  299667 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:31.429215  299667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:31.429338  299667 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:31.429419  299667 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:31.463460  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:31.463487  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:31.463511  299667 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:45:31.463580  299667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:31.463714  299667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
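Note: the generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config like this can be sanity-checked on the node without touching cluster state via kubeadm's dry-run mode (an illustrative command, not run by the test):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
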
	
	I1205 07:45:31.463789  299667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:31.471606  299667 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:31.471702  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:31.480080  299667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:31.492950  299667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:31.505530  299667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:45:31.518323  299667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:31.521961  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.531618  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.655593  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:31.673339  299667 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:45:31.673398  299667 certs.go:195] generating shared ca certs ...
	I1205 07:45:31.673427  299667 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:31.673592  299667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:31.673665  299667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:31.673695  299667 certs.go:257] generating profile certs ...
	I1205 07:45:31.673812  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:45:31.673907  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:45:31.673970  299667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:45:31.674103  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:31.674164  299667 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:31.674197  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:31.674246  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:31.674289  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:31.674341  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:31.674413  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:31.675038  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:31.699874  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:31.718981  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:31.739011  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:31.757897  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:31.776123  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:31.794286  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:31.815714  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:31.832875  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:31.851417  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:31.868401  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:31.885858  299667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
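[Editor's note] The "scp memory --> <path> (N bytes)" lines denote payloads that exist only in minikube's process memory (the rendered kubeconfig, systemd units, kubeadm.yaml) and are streamed straight to the node over the SSH session rather than copied from a file on disk. A rough equivalent using the system ssh client (host, port, and user taken from the log; not minikube's actual transfer code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// copyMemory streams an in-memory payload to a remote path via `ssh ... sudo tee`,
// mirroring what the "scp memory --> <path> (N bytes)" log lines describe.
func copyMemory(sshArgs []string, payload, dest string) error {
	args := append(sshArgs, fmt.Sprintf("sudo tee %s >/dev/null", dest))
	cmd := exec.Command("ssh", args...)
	cmd.Stdin = strings.NewReader(payload)
	return cmd.Run()
}

func main() {
	kubeconfig := "apiVersion: v1\nkind: Config\n" // assembled in memory, never written locally
	err := copyMemory(
		[]string{"-p", "33103", "docker@127.0.0.1"},
		kubeconfig,
		"/var/lib/minikube/kubeconfig",
	)
	if err != nil {
		fmt.Println(err)
	}
}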
	I1205 07:45:31.898468  299667 ssh_runner.go:195] Run: openssl version
	I1205 07:45:31.904594  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.911851  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:31.919124  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922684  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922758  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.963682  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:31.970739  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.977808  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:31.985046  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988699  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988790  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:32.029966  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:32.037736  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.045196  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:32.052663  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056573  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056689  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.097976  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
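[Editor's note] The openssl x509 -hash -noout calls print each CA's subject-name hash; OpenSSL looks certificates up by that hash, so a trust-store entry is a /etc/ssl/certs/<hash>.0 symlink, which is exactly what the sudo test -L probes (3ec20f2e.0, b5213941.0, 51391683.0) verify after each ln -fs. A small sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the subject-name hash OpenSSL uses to index trust-store entries.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	// OpenSSL resolves CAs by hashed filename, hence the <hash>.0 symlinks
	// that the `sudo test -L /etc/ssl/certs/...` probes look for.
	fmt.Printf("expect symlink: /etc/ssl/certs/%s.0\n", h)
}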
	I1205 07:45:32.106452  299667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:32.110712  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:32.154012  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:32.194946  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:32.235499  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:32.276192  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:32.316778  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
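[Editor's note] Each -checkend 86400 run asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it expires within that window, which is the signal minikube would act on to regenerate a cert before restarting the control plane. Sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0: still valid 24h from now. Non-zero: expiring, regenerate.
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", "/var/lib/minikube/certs/front-proxy-client.crt",
		"-checkend", "86400")
	if err := cmd.Run(); err != nil {
		fmt.Println("certificate expires within 24h; would regenerate")
		return
	}
	fmt.Println("certificate valid for at least another 24h")
}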
	I1205 07:45:32.357969  299667 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:32.358063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:32.358128  299667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:32.393923  299667 cri.go:89] found id: ""
	I1205 07:45:32.393993  299667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:32.401825  299667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:32.401893  299667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:32.401977  299667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:32.409190  299667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:32.409869  299667 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.410186  299667 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-622440" cluster setting kubeconfig missing "newest-cni-622440" context setting]
	I1205 07:45:32.410754  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
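[Editor's note] The "WriteFile acquiring ... {Delay:500ms Timeout:1m0s}" lines show kubeconfig writes being serialized through a named lock so concurrent minikube invocations cannot interleave partial writes. A Linux-only sketch of the pattern using flock (a stand-in, not the lock implementation minikube actually uses; the real one polls with the Delay/Timeout shown in the log rather than blocking indefinitely):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// lockedWrite takes an exclusive lock on a sidecar file before rewriting path,
// so two concurrent writers cannot corrupt the kubeconfig.
func lockedWrite(path string, data []byte) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0600)
	if err != nil {
		return err
	}
	defer lock.Close()
	if err := syscall.Flock(int(lock.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(lock.Fd()), syscall.LOCK_UN)
	return os.WriteFile(path, data, 0600)
}

func main() {
	if err := lockedWrite("kubeconfig", []byte("apiVersion: v1\nkind: Config\n")); err != nil {
		fmt.Println(err)
	}
}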
	I1205 07:45:32.412652  299667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:32.420082  299667 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:32.420112  299667 kubeadm.go:602] duration metric: took 18.200733ms to restartPrimaryControlPlane
	I1205 07:45:32.420122  299667 kubeadm.go:403] duration metric: took 62.162615ms to StartCluster
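[Editor's note] The restart decision above hinges on `sudo diff -u kubeadm.yaml kubeadm.yaml.new`: an empty diff (exit status 0) means the deployed kubeadm config matches what this run would generate, so the running control plane is reused ("does not require reconfiguration") instead of being re-initialized. The branching, sketched:

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfig compares the deployed kubeadm config against the freshly
// generated one; diff exits 0 when identical and 1 when the files differ.
func needsReconfig(current, fresh string) (bool, error) {
	err := exec.Command("diff", "-u", current, fresh).Run()
	if err == nil {
		return false, nil // identical: reuse the running cluster
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // files differ: must reconfigure
	}
	return false, err // diff itself failed (missing file, ...)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}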
	I1205 07:45:32.420136  299667 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.420193  299667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.421089  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.421340  299667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:32.421617  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:32.421690  299667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:32.421796  299667 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-622440"
	I1205 07:45:32.421816  299667 addons.go:70] Setting default-storageclass=true in profile "newest-cni-622440"
	I1205 07:45:32.421860  299667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-622440"
	I1205 07:45:32.421826  299667 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-622440"
	I1205 07:45:32.421949  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.422169  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.422375  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.421807  299667 addons.go:70] Setting dashboard=true in profile "newest-cni-622440"
	I1205 07:45:32.422859  299667 addons.go:239] Setting addon dashboard=true in "newest-cni-622440"
	W1205 07:45:32.422869  299667 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:32.422895  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.423306  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.425911  299667 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:32.429270  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:32.459552  299667 addons.go:239] Setting addon default-storageclass=true in "newest-cni-622440"
	I1205 07:45:32.459590  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.459994  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.466676  299667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:32.469573  299667 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:32.469693  299667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.469710  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:32.469779  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.479022  299667 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1205 07:45:32.484603  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:32.484629  299667 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:32.484694  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.517396  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.529599  299667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.529620  299667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:32.529685  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.549325  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.574838  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
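[Editor's note] The docker container inspect -f calls above use a Go template to pull the host port Docker published for the container's 22/tcp endpoint; that port (33103 here) is what the subsequent ssh clients dial on 127.0.0.1 with the profile's id_rsa. The same lookup, sketched:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the published host port for the container's SSH endpoint,
// mirroring the inspect template in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-622440")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh -p", port, "docker@127.0.0.1")
}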
	I1205 07:45:32.643911  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:32.670090  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.687313  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:32.687343  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:32.721498  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:32.721518  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:32.728026  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.759870  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:32.759892  299667 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:32.773100  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:32.773119  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:32.790813  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:32.790887  299667 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:32.806943  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:32.807008  299667 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:32.827525  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:32.827547  299667 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:32.840144  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:32.840166  299667 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:32.856122  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:32.856196  299667 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:32.869771  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:33.097468  299667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:33.097593  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:33.097728  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097794  299667 retry.go:31] will retry after 241.658936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.097872  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097907  299667 retry.go:31] will retry after 176.603947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.098118  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.098157  299667 retry.go:31] will retry after 229.408257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
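[Editor's note] All of these apply failures are one symptom: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and since nothing is listening on localhost:8443 yet during the control-plane restart, every apply dies at validation with "connection refused" before any manifest is submitted. minikube's response, visible in the retry.go lines, is to re-run each apply after a growing, jittered delay until the apiserver comes up, hence the irregular "will retry after ..." durations. A sketch of that retry loop (illustrative; not minikube's retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` with jittered, doubling delays between
// attempts, returning the last error if the apiserver never becomes reachable.
func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		// Jitter keeps the concurrent appliers (storage-provisioner,
		// storageclass, dashboard) from retrying in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5)
}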
	I1205 07:45:33.275635  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:33.328106  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.333654  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.333699  299667 retry.go:31] will retry after 493.072495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.339842  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:33.420976  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421140  299667 retry.go:31] will retry after 232.443098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.421103  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421275  299667 retry.go:31] will retry after 218.243264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.598377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:33.640183  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:33.654611  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.714507  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.714586  299667 retry.go:31] will retry after 296.021108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.735889  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.735929  299667 retry.go:31] will retry after 647.569018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.827334  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:33.912321  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.912410  299667 retry.go:31] will retry after 511.925432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.011792  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:34.070223  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.070270  299667 retry.go:31] will retry after 1.045041767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.098366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:34.384609  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:34.425097  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:34.456662  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.456771  299667 retry.go:31] will retry after 1.012360732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1205 07:45:34.490780  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.490815  299667 retry.go:31] will retry after 673.94662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:34.598028  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.097803  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.115652  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:35.165241  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:35.189445  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.189528  299667 retry.go:31] will retry after 873.335351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1205 07:45:35.234071  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.234107  299667 retry.go:31] will retry after 1.250813401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:35.469343  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:35.535355  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.535386  299667 retry.go:31] will retry after 1.457971594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:35.598793  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.063166  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:36.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:36.141912  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.141992  299667 retry.go:31] will retry after 1.289648417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:36.485696  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:36.544841  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.544879  299667 retry.go:31] will retry after 2.662984572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:36.598226  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.993607  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.063691  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.063774  299667 retry.go:31] will retry after 1.151172803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:37.098032  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.431865  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:37.492142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.492177  299667 retry.go:31] will retry after 3.504601193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:37.598357  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.098363  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.215346  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:38.274274  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.274309  299667 retry.go:31] will retry after 1.757329115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:38.597749  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.097719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.208847  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:39.266142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.266182  299667 retry.go:31] will retry after 3.436463849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1205 07:45:39.598395  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.031973  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:40.092374  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.092409  299667 retry.go:31] will retry after 2.182976597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:40.098469  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.598422  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.997583  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:41.059423  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.059455  299667 retry.go:31] will retry after 3.560419221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1205 07:45:41.098613  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.098453  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.276211  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:42.351488  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.351524  299667 retry.go:31] will retry after 9.602308898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1205 07:45:42.598167  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.703420  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:42.760290  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.760322  299667 retry.go:31] will retry after 5.381602643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:43.097810  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:43.597706  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.098335  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.597780  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.620405  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:44.677458  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.677489  299667 retry.go:31] will retry after 4.279612118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:45.098273  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:45.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.597868  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.097740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.597768  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.097748  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.142199  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:48.202751  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.202784  299667 retry.go:31] will retry after 9.130347643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.958075  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:49.020580  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.020664  299667 retry.go:31] will retry after 5.816091686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:49.597778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.097903  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.598277  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.098323  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.598320  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.954438  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:52.018482  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.018522  299667 retry.go:31] will retry after 11.887626777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.098608  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:52.598374  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.098377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.098330  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.597906  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.837992  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:54.928421  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:54.928451  299667 retry.go:31] will retry after 21.232814528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:55.097998  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:55.598566  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.098233  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.598487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.333368  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:57.391373  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.391409  299667 retry.go:31] will retry after 6.534046571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.598447  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.098487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.597673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.098584  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.597752  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.111473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.597738  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.097860  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.597786  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.598349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.097778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
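The long runs of "sudo pgrep -xnf kube-apiserver.*minikube.*" above fire every ~500ms: a liveness poll waiting for the apiserver process to appear inside the node, which it never does. A self-contained sketch of that polling loop (illustrative; the pattern string comes from the log, the 2-minute deadline is an assumption):

	// waitapiserver.go - poll pgrep at 500ms until a match or deadline.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q", pattern)
			case <-ticker.C:
				// pgrep exits 0 when at least one process matches.
				if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println(err)
		}
	}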
	I1205 07:46:03.906517  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:03.926085  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:03.977088  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:03.977126  299667 retry.go:31] will retry after 8.615984736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.014857  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.014953  299667 retry.go:31] will retry after 11.096851447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.098074  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.598727  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.098302  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.598378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.098313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.098365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.597739  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.597740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.098581  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.598396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.098145  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.097819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.598431  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.098421  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.593706  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:12.598498  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:12.687257  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:12.687290  299667 retry.go:31] will retry after 19.919210015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:13.098633  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:13.598345  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.097716  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.598398  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.112618  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:15.170666  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.170700  299667 retry.go:31] will retry after 26.586504873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.598228  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.161584  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:16.224162  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.224193  299667 retry.go:31] will retry after 29.423350117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.597722  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.097721  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.597743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.098656  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.598271  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.098404  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.598719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.597725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.097770  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.598319  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.097718  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.098368  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.598400  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.098708  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.597766  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.098393  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.598238  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.098573  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.598365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.598524  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.097726  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.598366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.098021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.598337  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.098378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.097725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.597622  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:32.597702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:32.607176  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:32.654366  299667 cri.go:89] found id: ""
	I1205 07:46:32.654387  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.654395  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:32.654402  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:32.654460  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:46:32.707430  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707464  299667 retry.go:31] will retry after 35.686554771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707503  299667 cri.go:89] found id: ""
	I1205 07:46:32.707512  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.707519  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:32.707525  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:32.707583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:32.732319  299667 cri.go:89] found id: ""
	I1205 07:46:32.732341  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.732350  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:32.732356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:32.732414  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:32.756204  299667 cri.go:89] found id: ""
	I1205 07:46:32.756226  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.756235  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:32.756241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:32.756313  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:32.785401  299667 cri.go:89] found id: ""
	I1205 07:46:32.785423  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.785431  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:32.785437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:32.785493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:32.811348  299667 cri.go:89] found id: ""
	I1205 07:46:32.811373  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.811381  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:32.811388  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:32.811461  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:32.835578  299667 cri.go:89] found id: ""
	I1205 07:46:32.835603  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.835612  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:32.835618  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:32.835679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:32.861749  299667 cri.go:89] found id: ""
	I1205 07:46:32.861773  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.861781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
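Each probe cycle in the log enumerates the expected control-plane workloads (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) by shelling out to crictl and treating an empty ID list as "not running". A minimal Go sketch of that per-component probe, assuming passwordless sudo on the node and crictl on the PATH (helper names here are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the probe in the log: ask crictl for all
    // containers (running or exited) whose name matches the component and
    // return their IDs, one per whitespace-separated token of output.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }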
	I1205 07:46:32.861790  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:32.861801  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:32.937533  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
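Every describe-nodes attempt above dies at the first step, fetching the server's API group list, because nothing is listening on port 8443 inside the node. The symptom is reproducible with a bare TCP dial; a minimal sketch, assuming the same localhost:8443 endpoint:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // With no kube-apiserver process bound to 8443, the kernel refuses the
    // connection immediately rather than timing out, which is exactly the
    // "connect: connection refused" seen in the kubectl stderr above.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }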
	I1205 07:46:32.937555  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:32.937568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:32.962127  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:32.962161  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:32.989223  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:32.989256  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:33.046092  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:33.046128  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
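The container-status probe a few lines up uses a shell fallback chain: `which crictl || echo crictl` substitutes the full path to crictl when it is installed (or the bare name otherwise), and if that invocation fails for any reason the pipeline falls back to `docker ps -a`. A self-contained sketch of the same chain:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus runs the same bash pipeline the log shows: prefer
    // crictl (by full path when `which` finds it), fall back to docker.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }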
	I1205 07:46:35.559882  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:35.570602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:35.570679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:35.597322  299667 cri.go:89] found id: ""
	I1205 07:46:35.597348  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.597358  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:35.597364  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:35.597420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:35.631556  299667 cri.go:89] found id: ""
	I1205 07:46:35.631585  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.631594  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:35.631605  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:35.631670  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:35.666766  299667 cri.go:89] found id: ""
	I1205 07:46:35.666790  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.666808  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:35.666851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:35.666928  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:35.696469  299667 cri.go:89] found id: ""
	I1205 07:46:35.696494  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.696503  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:35.696510  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:35.696570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:35.721564  299667 cri.go:89] found id: ""
	I1205 07:46:35.721587  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.721613  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:35.721620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:35.721679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:35.750450  299667 cri.go:89] found id: ""
	I1205 07:46:35.750474  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.750483  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:35.750490  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:35.750577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:35.779075  299667 cri.go:89] found id: ""
	I1205 07:46:35.779097  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.779105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:35.779111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:35.779171  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:35.804778  299667 cri.go:89] found id: ""
	I1205 07:46:35.804849  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.804870  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:35.804891  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.804928  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.818664  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.818691  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.896985  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:35.897010  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:35.897023  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:35.922964  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.922997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:35.950985  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:35.951012  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.510773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:38.521214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:38.521283  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:38.547037  299667 cri.go:89] found id: ""
	I1205 07:46:38.547061  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.547069  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:38.547088  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:38.547152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:38.571870  299667 cri.go:89] found id: ""
	I1205 07:46:38.571894  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.571903  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:38.571909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:38.571967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:38.597667  299667 cri.go:89] found id: ""
	I1205 07:46:38.597693  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.597701  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.597707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:38.597781  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:38.634302  299667 cri.go:89] found id: ""
	I1205 07:46:38.634328  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.634336  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:38.634343  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:38.634411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:38.662787  299667 cri.go:89] found id: ""
	I1205 07:46:38.662813  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.662822  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.662829  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:38.662886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:38.688000  299667 cri.go:89] found id: ""
	I1205 07:46:38.688026  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.688034  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:38.688040  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:38.688108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:38.712589  299667 cri.go:89] found id: ""
	I1205 07:46:38.712611  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.712619  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.712631  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:38.712688  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:38.736469  299667 cri.go:89] found id: ""
	I1205 07:46:38.736490  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.736499  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:38.736507  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.736521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.763556  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.763586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.818344  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.818379  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.832020  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.832054  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.931143  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:38.931164  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:38.931178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:41.457376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.468655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:41.468729  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:41.496317  299667 cri.go:89] found id: ""
	I1205 07:46:41.496391  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.496415  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:41.496434  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:41.496520  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:41.522205  299667 cri.go:89] found id: ""
	I1205 07:46:41.522230  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.522238  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:41.522244  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:41.522304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:41.547643  299667 cri.go:89] found id: ""
	I1205 07:46:41.547668  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.547677  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.547684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:41.547743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:41.576000  299667 cri.go:89] found id: ""
	I1205 07:46:41.576024  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.576032  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:41.576039  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:41.576093  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:41.610347  299667 cri.go:89] found id: ""
	I1205 07:46:41.610373  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.610393  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.610399  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:41.610455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:41.641947  299667 cri.go:89] found id: ""
	I1205 07:46:41.641974  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.641983  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:41.641990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:41.642049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:41.680331  299667 cri.go:89] found id: ""
	I1205 07:46:41.680355  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.680363  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.680370  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:41.680426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:41.707279  299667 cri.go:89] found id: ""
	I1205 07:46:41.707301  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.707310  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:41.707319  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.707331  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.720629  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.720654  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 07:46:41.757919  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:41.789558  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:41.789582  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:41.789596  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:41.829441  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.829475  299667 retry.go:31] will retry after 23.380573162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.840285  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.840316  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.875962  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.875990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.439978  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.450947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:44.451025  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:44.476311  299667 cri.go:89] found id: ""
	I1205 07:46:44.476335  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.476344  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:44.476350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:44.476420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:44.501030  299667 cri.go:89] found id: ""
	I1205 07:46:44.501064  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.501073  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:44.501078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:44.501138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:44.525674  299667 cri.go:89] found id: ""
	I1205 07:46:44.525697  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.525705  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.525711  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:44.525769  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:44.554878  299667 cri.go:89] found id: ""
	I1205 07:46:44.554903  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.554911  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:44.554918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:44.554991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:44.579773  299667 cri.go:89] found id: ""
	I1205 07:46:44.579796  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.579805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.579811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:44.579867  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:44.611991  299667 cri.go:89] found id: ""
	I1205 07:46:44.612017  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.612042  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:44.612049  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:44.612108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:44.646395  299667 cri.go:89] found id: ""
	I1205 07:46:44.646418  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.646427  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.646433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:44.646499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:44.674148  299667 cri.go:89] found id: ""
	I1205 07:46:44.674170  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.674178  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:44.674187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.674199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.734427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.734469  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.748531  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:44.748561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:44.815565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:44.815586  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:44.815601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:44.841456  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:44.841492  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:45.648666  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:45.706769  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:45.706803  299667 retry.go:31] will retry after 32.901994647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:47.381509  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.392949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:47.393065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:47.424033  299667 cri.go:89] found id: ""
	I1205 07:46:47.424057  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.424066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:47.424072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:47.424140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:47.451239  299667 cri.go:89] found id: ""
	I1205 07:46:47.451265  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.451275  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:47.451282  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:47.451342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:47.475229  299667 cri.go:89] found id: ""
	I1205 07:46:47.475250  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.475259  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.475265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:47.475322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:47.500010  299667 cri.go:89] found id: ""
	I1205 07:46:47.500036  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.500045  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:47.500051  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:47.500110  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:47.525665  299667 cri.go:89] found id: ""
	I1205 07:46:47.525691  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.525700  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:47.525707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:47.525767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:47.550876  299667 cri.go:89] found id: ""
	I1205 07:46:47.550902  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.550911  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:47.550917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:47.550978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:47.574838  299667 cri.go:89] found id: ""
	I1205 07:46:47.574904  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.574926  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:47.574940  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:47.575018  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:47.606672  299667 cri.go:89] found id: ""
	I1205 07:46:47.606698  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.606707  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:47.606716  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:47.606728  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.644360  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:47.644388  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:47.706982  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:47.707019  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:47.720731  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:47.720759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:47.782357  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:47.782378  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:47.782393  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.307630  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:50.318086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:50.318159  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:50.342816  299667 cri.go:89] found id: ""
	I1205 07:46:50.342838  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.342847  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:50.342853  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:50.342921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:50.371375  299667 cri.go:89] found id: ""
	I1205 07:46:50.371440  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.371462  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:50.371478  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:50.371566  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:50.401098  299667 cri.go:89] found id: ""
	I1205 07:46:50.401206  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.401224  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:50.401245  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:50.401310  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:50.432101  299667 cri.go:89] found id: ""
	I1205 07:46:50.432134  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.432143  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:50.432149  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:50.432262  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:50.457371  299667 cri.go:89] found id: ""
	I1205 07:46:50.457396  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.457405  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:50.457413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:50.457469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:50.486796  299667 cri.go:89] found id: ""
	I1205 07:46:50.486821  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.486830  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:50.486836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:50.486945  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:50.515505  299667 cri.go:89] found id: ""
	I1205 07:46:50.515529  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.515537  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:50.515544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:50.515606  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:50.543462  299667 cri.go:89] found id: ""
	I1205 07:46:50.543486  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.543495  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:50.543503  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:50.543561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:50.600091  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:50.600276  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:50.619872  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:50.619944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:50.690141  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:50.690160  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:50.690173  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.715362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:50.715398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:53.244467  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:53.256174  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:53.256240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:53.279782  299667 cri.go:89] found id: ""
	I1205 07:46:53.279803  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.279810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:53.279817  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:53.279878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:53.303793  299667 cri.go:89] found id: ""
	I1205 07:46:53.303813  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.303821  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:53.303827  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:53.303884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:53.332886  299667 cri.go:89] found id: ""
	I1205 07:46:53.332908  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.332916  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:53.332922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:53.332981  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:53.359130  299667 cri.go:89] found id: ""
	I1205 07:46:53.359153  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.359161  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:53.359168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:53.359229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:53.384922  299667 cri.go:89] found id: ""
	I1205 07:46:53.384947  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.384966  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:53.384972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:53.385033  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:53.409882  299667 cri.go:89] found id: ""
	I1205 07:46:53.409903  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.409912  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:53.409918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:53.409982  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:53.435229  299667 cri.go:89] found id: ""
	I1205 07:46:53.435254  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.435263  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:53.435269  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:53.435326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:53.460378  299667 cri.go:89] found id: ""
	I1205 07:46:53.460402  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.460411  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:53.460419  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:53.460430  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:53.515653  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:53.515686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:53.529252  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:53.529277  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:53.590407  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:53.590427  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:53.590439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:53.615638  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:53.615670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
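The block above is one pass of minikube's apiserver wait loop: roughly every three seconds (07:46:53, :56, :59, ...) it pgreps for a running kube-apiserver process, asks crictl for each expected control-plane container, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal bash sketch of the same per-component check, runnable by hand inside the node (the component names are the ones the log searches for; the loop itself is an illustration, not minikube's actual code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # "crictl ps -a --quiet" prints only container IDs; empty output means
      # no container in any state matches the name filter.
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done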
	I1205 07:46:56.149491  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:56.160491  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:56.160560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:56.186032  299667 cri.go:89] found id: ""
	I1205 07:46:56.186055  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.186063  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:56.186069  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:56.186127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:56.210655  299667 cri.go:89] found id: ""
	I1205 07:46:56.210683  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.210691  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:56.210698  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:56.210760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:56.236968  299667 cri.go:89] found id: ""
	I1205 07:46:56.237039  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.237060  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:56.237078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:56.237197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:56.261470  299667 cri.go:89] found id: ""
	I1205 07:46:56.261543  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.261559  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:56.261567  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:56.261626  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:56.287544  299667 cri.go:89] found id: ""
	I1205 07:46:56.287569  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.287578  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:56.287586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:56.287664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:56.313083  299667 cri.go:89] found id: ""
	I1205 07:46:56.313154  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.313200  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:56.313222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:56.313290  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:56.338841  299667 cri.go:89] found id: ""
	I1205 07:46:56.338865  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.338879  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:56.338886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:56.338971  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:56.364821  299667 cri.go:89] found id: ""
	I1205 07:46:56.364883  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.364906  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:56.364927  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:56.364953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:56.421380  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:56.421412  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:56.434797  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:56.434825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:56.500557  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:56.500579  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:56.500592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:56.525423  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:56.525453  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.059925  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:59.070350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:59.070417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:59.106211  299667 cri.go:89] found id: ""
	I1205 07:46:59.106234  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.106242  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:59.106250  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:59.106308  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:59.134075  299667 cri.go:89] found id: ""
	I1205 07:46:59.134101  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.134110  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:59.134116  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:59.134173  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:59.163091  299667 cri.go:89] found id: ""
	I1205 07:46:59.163119  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.163128  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:59.163134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:59.163195  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:59.189283  299667 cri.go:89] found id: ""
	I1205 07:46:59.189308  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.189316  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:59.189323  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:59.189384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:59.214391  299667 cri.go:89] found id: ""
	I1205 07:46:59.214416  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.214433  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:59.214439  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:59.214498  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:59.246223  299667 cri.go:89] found id: ""
	I1205 07:46:59.246246  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.246255  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:59.246262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:59.246321  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:59.274955  299667 cri.go:89] found id: ""
	I1205 07:46:59.274991  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.274999  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:59.275006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:59.275074  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:59.302932  299667 cri.go:89] found id: ""
	I1205 07:46:59.302956  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.302965  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:59.302984  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:59.302997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:59.362548  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:59.362571  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:59.362583  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:59.387053  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:59.387085  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.413739  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:59.413767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:59.469532  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:59.469569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:01.983455  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.994190  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:01.994316  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:02.023883  299667 cri.go:89] found id: ""
	I1205 07:47:02.023913  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.023922  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:02.023929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:02.023992  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:02.050293  299667 cri.go:89] found id: ""
	I1205 07:47:02.050367  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.050383  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:02.050390  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:02.050458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:02.076131  299667 cri.go:89] found id: ""
	I1205 07:47:02.076157  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.076166  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:02.076172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:02.076235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:02.115590  299667 cri.go:89] found id: ""
	I1205 07:47:02.115623  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.115632  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:02.115638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:02.115733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:02.155255  299667 cri.go:89] found id: ""
	I1205 07:47:02.155281  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.155290  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:02.155297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:02.155355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:02.184142  299667 cri.go:89] found id: ""
	I1205 07:47:02.184169  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.184178  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:02.184185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:02.184244  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:02.208969  299667 cri.go:89] found id: ""
	I1205 07:47:02.208997  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.209006  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:02.209036  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:02.209126  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:02.233523  299667 cri.go:89] found id: ""
	I1205 07:47:02.233556  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.233565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:02.233597  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:02.233609  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:02.289818  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:02.289852  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:02.303686  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:02.303756  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:02.370663  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:02.370711  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:02.370723  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:02.395466  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:02.395508  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:04.925546  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.937771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:04.937866  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:04.967009  299667 cri.go:89] found id: ""
	I1205 07:47:04.967031  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.967039  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:04.967046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:04.967103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:04.998327  299667 cri.go:89] found id: ""
	I1205 07:47:04.998351  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.998360  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:04.998365  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:04.998426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:05.026478  299667 cri.go:89] found id: ""
	I1205 07:47:05.026505  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.026513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:05.026521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:05.026583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:05.051556  299667 cri.go:89] found id: ""
	I1205 07:47:05.051580  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.051588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:05.051595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:05.051658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:05.078546  299667 cri.go:89] found id: ""
	I1205 07:47:05.078570  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.078579  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:05.078585  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:05.078649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:05.107928  299667 cri.go:89] found id: ""
	I1205 07:47:05.107955  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.107964  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:05.107971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:05.108035  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:05.134695  299667 cri.go:89] found id: ""
	I1205 07:47:05.134718  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.134727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:05.134733  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:05.134792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:05.160991  299667 cri.go:89] found id: ""
	I1205 07:47:05.161017  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.161025  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:05.161035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:05.161048  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:05.211053  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:47:05.219354  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:05.219426  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:05.274067  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:05.274165  299667 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
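The storageclass apply above fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that download is the first thing to die. The --validate=false workaround named in the error only skips the schema step; a hedged illustration of why it would not rescue the addon here (same binary, kubeconfig, and manifest path as in the log):

    # Turning validation off just moves the failure from the openapi
    # download to the apply round-trip itself; the apiserver still has
    # to be reachable for the request to succeed.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml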
	I1205 07:47:05.274831  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.274851  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.336443  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.336473  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:05.336486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:05.361343  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.361374  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:07.887800  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.899185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:07.899259  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:07.927401  299667 cri.go:89] found id: ""
	I1205 07:47:07.927423  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.927431  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:07.927437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:07.927511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:07.958986  299667 cri.go:89] found id: ""
	I1205 07:47:07.959008  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.959017  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:07.959023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:07.959081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:07.986953  299667 cri.go:89] found id: ""
	I1205 07:47:07.986974  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.986983  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:07.986989  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:07.987052  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:08.013548  299667 cri.go:89] found id: ""
	I1205 07:47:08.013573  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.013581  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:08.013590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:08.013654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:08.039626  299667 cri.go:89] found id: ""
	I1205 07:47:08.039650  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.039658  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.039664  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:08.039724  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:08.064448  299667 cri.go:89] found id: ""
	I1205 07:47:08.064472  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.064482  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:08.064489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:08.064548  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:08.089144  299667 cri.go:89] found id: ""
	I1205 07:47:08.089234  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.089250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.089257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:08.089325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:08.124837  299667 cri.go:89] found id: ""
	I1205 07:47:08.124863  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.124890  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:08.124900  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.124917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.155028  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.155055  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.215310  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.215346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.229549  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.229577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.292266  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.292296  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:08.292309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:08.394608  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:47:08.457975  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:08.458074  299667 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
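The dashboard addon fails the same way, with one openapi-download error per -f manifest, so the ten errors share a single root cause: nothing is serving on localhost:8443. A quick hand check from inside the node (this assumes curl is present in the node image, which is not guaranteed):

    # "connection refused" (non-zero exit) means the socket is still down;
    # any HTTP response, even a 401, means the apiserver is at least listening.
    curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable yet"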
	I1205 07:47:10.816831  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:10.827471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:10.827537  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:10.856590  299667 cri.go:89] found id: ""
	I1205 07:47:10.856612  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.856621  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:10.856626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:10.856687  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:10.887186  299667 cri.go:89] found id: ""
	I1205 07:47:10.887207  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.887215  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:10.887221  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:10.887279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:10.914460  299667 cri.go:89] found id: ""
	I1205 07:47:10.914482  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.914490  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:10.914497  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:10.914554  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:10.943070  299667 cri.go:89] found id: ""
	I1205 07:47:10.943095  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.943103  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:10.943109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:10.943167  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:10.967007  299667 cri.go:89] found id: ""
	I1205 07:47:10.967034  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.967043  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:10.967050  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:10.967142  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:10.990367  299667 cri.go:89] found id: ""
	I1205 07:47:10.990394  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.990402  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:10.990408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:10.990465  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:11.021515  299667 cri.go:89] found id: ""
	I1205 07:47:11.021538  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.021547  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.021553  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:11.021616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:11.046137  299667 cri.go:89] found id: ""
	I1205 07:47:11.046159  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.046168  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:11.046176  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:11.046190  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:11.071756  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.071787  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:11.101757  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:11.101784  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:11.175924  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:11.175962  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:11.190392  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.190424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.252655  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
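
The block above is one full iteration of minikube's apiserver wait loop: check for a running apiserver process with pgrep, ask the CRI for each expected control-plane container by name, and, when every query comes back empty, gather kubelet, dmesg, containerd, and container-status logs plus a kubectl describe nodes attempt. The same iteration repeats below roughly every three seconds (inferred from the timestamps). Condensed as a sketch, using the exact commands from the log (the loop wrapper itself is illustrative):

    # Per-component CRI probe the harness runs each iteration.
    # The crictl command is verbatim from the log; the for-loop is illustrative.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"   # empty output => container absent
    done

Note also the fallback in the container-status gather step, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl, but fall back to docker ps if it is missing.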
	I1205 07:47:13.753819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:13.764287  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:13.764373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:13.790393  299667 cri.go:89] found id: ""
	I1205 07:47:13.790418  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.790426  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:13.790433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:13.790496  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:13.814911  299667 cri.go:89] found id: ""
	I1205 07:47:13.814935  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.814944  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:13.814951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:13.815007  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:13.839756  299667 cri.go:89] found id: ""
	I1205 07:47:13.839779  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.839787  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:13.839794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:13.839852  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:13.870908  299667 cri.go:89] found id: ""
	I1205 07:47:13.870933  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.870943  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:13.870949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:13.871010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:13.902182  299667 cri.go:89] found id: ""
	I1205 07:47:13.902208  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.902216  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:13.902223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:13.902281  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:13.928077  299667 cri.go:89] found id: ""
	I1205 07:47:13.928102  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.928111  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:13.928117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:13.928174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:13.952673  299667 cri.go:89] found id: ""
	I1205 07:47:13.952706  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.952715  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:13.952721  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:13.952786  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:13.982104  299667 cri.go:89] found id: ""
	I1205 07:47:13.982137  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.982147  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:13.982156  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:13.982168  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:14.047894  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:14.047925  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:14.061830  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:14.061861  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:14.145569  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:14.145587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:14.145601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:14.173369  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:14.173406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:16.701890  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:16.712471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:16.712541  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:16.737364  299667 cri.go:89] found id: ""
	I1205 07:47:16.737386  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.737394  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:16.737400  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:16.737458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:16.761826  299667 cri.go:89] found id: ""
	I1205 07:47:16.761849  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.761858  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:16.761864  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:16.761921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:16.787321  299667 cri.go:89] found id: ""
	I1205 07:47:16.787343  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.787352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:16.787359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:16.787419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:16.812059  299667 cri.go:89] found id: ""
	I1205 07:47:16.812080  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.812087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:16.812094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:16.812152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:16.835710  299667 cri.go:89] found id: ""
	I1205 07:47:16.835731  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.835739  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:16.835745  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:16.835804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:16.866817  299667 cri.go:89] found id: ""
	I1205 07:47:16.866839  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.866848  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:16.866854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:16.866915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:16.892855  299667 cri.go:89] found id: ""
	I1205 07:47:16.892877  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.892885  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:16.892891  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:16.892948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:16.921328  299667 cri.go:89] found id: ""
	I1205 07:47:16.921348  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.921356  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:16.921365  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.921378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.975810  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.975843  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:16.989559  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.989589  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:17.052011  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:17.052031  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:17.052044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:17.076823  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:17.076853  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:18.609402  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:47:18.686960  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:18.687059  299667 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:18.690290  299667 out.go:179] * Enabled addons: 
	I1205 07:47:18.693172  299667 addons.go:530] duration metric: took 1m46.271465904s for enable addons: enabled=[]
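
The storage-provisioner addon fails on the same OpenAPI fetch as the dashboard manifests; the retry at 07:47:18 fails the same way, minikube reports the addon error, and the summary line ends up with an empty "Enabled addons:" list after 1m46s. As above, --validate=false would only disable client-side schema validation and cannot make the apply succeed while localhost:8443 refuses connections. For reference, the apply being retried (verbatim from the log; it can only succeed once kube-apiserver is actually listening):

    # The addon apply minikube retries (verbatim from the log).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml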
	I1205 07:47:19.612423  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:19.623124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:19.623194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:19.651237  299667 cri.go:89] found id: ""
	I1205 07:47:19.651260  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.651268  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:19.651276  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:19.651338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:19.679760  299667 cri.go:89] found id: ""
	I1205 07:47:19.679781  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.679790  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:19.679795  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:19.679854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:19.703620  299667 cri.go:89] found id: ""
	I1205 07:47:19.703640  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.703652  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:19.703658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:19.703731  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:19.727543  299667 cri.go:89] found id: ""
	I1205 07:47:19.727607  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.727629  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:19.727645  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:19.727736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:19.751580  299667 cri.go:89] found id: ""
	I1205 07:47:19.751606  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.751614  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:19.751620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:19.751678  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:19.778033  299667 cri.go:89] found id: ""
	I1205 07:47:19.778058  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.778066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:19.778074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:19.778130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:19.805321  299667 cri.go:89] found id: ""
	I1205 07:47:19.805346  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.805354  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:19.805360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:19.805419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:19.828911  299667 cri.go:89] found id: ""
	I1205 07:47:19.828932  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.828940  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:19.828949  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:19.828961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:19.842046  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:19.842072  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:19.924477  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:19.924542  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:19.924568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:19.949241  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:19.949279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:19.977260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:19.977287  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.534572  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:22.545193  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:22.545272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:22.570057  299667 cri.go:89] found id: ""
	I1205 07:47:22.570083  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.570092  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:22.570098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:22.570163  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:22.595296  299667 cri.go:89] found id: ""
	I1205 07:47:22.595321  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.595330  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:22.595337  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:22.595421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:22.620283  299667 cri.go:89] found id: ""
	I1205 07:47:22.620307  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.620315  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:22.620322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:22.620399  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:22.644353  299667 cri.go:89] found id: ""
	I1205 07:47:22.644379  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.644389  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:22.644395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:22.644474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:22.674856  299667 cri.go:89] found id: ""
	I1205 07:47:22.674885  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.674894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:22.674900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:22.674980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:22.699975  299667 cri.go:89] found id: ""
	I1205 07:47:22.700002  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.700011  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:22.700018  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:22.700089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:22.725706  299667 cri.go:89] found id: ""
	I1205 07:47:22.725734  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.725743  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:22.725753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:22.725822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:22.750409  299667 cri.go:89] found id: ""
	I1205 07:47:22.750430  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.750439  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:22.750459  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:22.750471  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:22.775719  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:22.775754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:22.806148  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:22.806175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.863750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:22.863786  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:22.878145  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:22.878174  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:22.945284  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:25.446099  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:25.457267  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:25.457345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:25.484246  299667 cri.go:89] found id: ""
	I1205 07:47:25.484273  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.484282  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:25.484289  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:25.484346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:25.513783  299667 cri.go:89] found id: ""
	I1205 07:47:25.513806  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.513815  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:25.513821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:25.513895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:25.542603  299667 cri.go:89] found id: ""
	I1205 07:47:25.542627  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.542636  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:25.542642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:25.542768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:25.566393  299667 cri.go:89] found id: ""
	I1205 07:47:25.566417  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.566427  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:25.566433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:25.566510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:25.591113  299667 cri.go:89] found id: ""
	I1205 07:47:25.591148  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.591157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:25.591164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:25.591237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:25.619895  299667 cri.go:89] found id: ""
	I1205 07:47:25.619919  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.619928  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:25.619935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:25.619991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:25.645287  299667 cri.go:89] found id: ""
	I1205 07:47:25.645311  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.645319  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:25.645326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:25.645386  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:25.670944  299667 cri.go:89] found id: ""
	I1205 07:47:25.670967  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.670975  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:25.671025  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:25.671043  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:25.728687  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:25.728721  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:25.743347  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:25.743373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:25.808046  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:25.808069  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:25.808082  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:25.833265  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:25.833298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:28.366360  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:28.378460  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:28.378539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:28.413651  299667 cri.go:89] found id: ""
	I1205 07:47:28.413678  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.413687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:28.413694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:28.413755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:28.439196  299667 cri.go:89] found id: ""
	I1205 07:47:28.439223  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.439232  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:28.439238  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:28.439323  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:28.463516  299667 cri.go:89] found id: ""
	I1205 07:47:28.463587  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.463610  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:28.463628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:28.463709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:28.489425  299667 cri.go:89] found id: ""
	I1205 07:47:28.489450  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.489459  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:28.489467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:28.489560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:28.516772  299667 cri.go:89] found id: ""
	I1205 07:47:28.516797  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.516806  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:28.516812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:28.516872  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:28.543466  299667 cri.go:89] found id: ""
	I1205 07:47:28.543490  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.543498  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:28.543507  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:28.543564  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:28.568431  299667 cri.go:89] found id: ""
	I1205 07:47:28.568455  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.568463  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:28.568469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:28.568528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:28.593549  299667 cri.go:89] found id: ""
	I1205 07:47:28.593573  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.593581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:28.593590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:28.593601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:28.652330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:28.652364  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:28.665857  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:28.665882  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:28.733864  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:28.733886  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:28.733898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:28.758935  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:28.758971  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:31.286625  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:31.297007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:31.297075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:31.324486  299667 cri.go:89] found id: ""
	I1205 07:47:31.324508  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.324517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:31.324523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:31.324585  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:31.367211  299667 cri.go:89] found id: ""
	I1205 07:47:31.367234  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.367242  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:31.367249  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:31.367336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:31.398063  299667 cri.go:89] found id: ""
	I1205 07:47:31.398124  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.398148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:31.398166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:31.398239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:31.430255  299667 cri.go:89] found id: ""
	I1205 07:47:31.430280  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.430288  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:31.430303  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:31.430362  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:31.455188  299667 cri.go:89] found id: ""
	I1205 07:47:31.455213  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.455222  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:31.455228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:31.455304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:31.483709  299667 cri.go:89] found id: ""
	I1205 07:47:31.483734  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.483743  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:31.483754  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:31.483841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:31.511054  299667 cri.go:89] found id: ""
	I1205 07:47:31.511081  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.511090  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:31.511096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:31.511154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:31.536168  299667 cri.go:89] found id: ""
	I1205 07:47:31.536193  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.536202  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:31.536211  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:31.536222  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:31.592031  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:31.592066  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:31.606480  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:31.606506  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:31.673271  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:31.673294  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:31.673309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:31.699030  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:31.699063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
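
The pass above is one complete probe-and-gather cycle: minikube first looks for a running apiserver process, then asks the CRI for each expected control-plane container by name, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal bash sketch of the same probe, assuming crictl talks to the default containerd CRI socket (illustrative, not minikube's own code):

    # look for an apiserver process the way the log does (-x exact, -n newest, -f full cmdline)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # then query the CRI for each control-plane container by name;
    # an empty result corresponds to the found id: "" lines above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
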
	I1205 07:47:34.230473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:34.241086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:34.241182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:34.266354  299667 cri.go:89] found id: ""
	I1205 07:47:34.266377  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.266386  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:34.266393  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:34.266455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:34.295281  299667 cri.go:89] found id: ""
	I1205 07:47:34.295304  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.295313  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:34.295322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:34.295381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:34.320096  299667 cri.go:89] found id: ""
	I1205 07:47:34.320119  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.320127  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:34.320134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:34.320193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:34.351699  299667 cri.go:89] found id: ""
	I1205 07:47:34.351769  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.351778  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:34.351785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:34.351890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:34.384621  299667 cri.go:89] found id: ""
	I1205 07:47:34.384643  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.384651  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:34.384658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:34.384716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:34.416183  299667 cri.go:89] found id: ""
	I1205 07:47:34.416209  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.416217  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:34.416225  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:34.416303  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:34.442818  299667 cri.go:89] found id: ""
	I1205 07:47:34.442843  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.442852  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:34.442859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:34.442926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:34.467574  299667 cri.go:89] found id: ""
	I1205 07:47:34.467600  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.467608  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:34.467618  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:34.467630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:34.525566  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:34.525599  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:34.538971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:34.539003  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:34.603104  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:34.603123  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:34.603135  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:34.627990  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:34.628024  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
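
The repeated "connection refused" stderr above comes from kubectl running inside the node against /var/lib/minikube/kubeconfig, which points at localhost:8443; with no kube-apiserver container running, nothing is listening on that port, so every API group discovery attempt fails the same way. Two hedged checks from the host, assuming the node is still up and ss/curl exist in the node image (PROFILE is a placeholder for the failing profile):

    # nothing bound to 8443 inside the node would confirm the refused connections
    minikube -p PROFILE ssh -- sudo ss -lntp | grep 8443
    # a healthy apiserver would answer its health endpoint instead
    minikube -p PROFILE ssh -- curl -sk https://localhost:8443/healthz
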
	I1205 07:47:37.156741  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:37.168917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:37.168986  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:37.194896  299667 cri.go:89] found id: ""
	I1205 07:47:37.194920  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.194929  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:37.194935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:37.194996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:37.220279  299667 cri.go:89] found id: ""
	I1205 07:47:37.220316  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.220324  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:37.220331  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:37.220402  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:37.244728  299667 cri.go:89] found id: ""
	I1205 07:47:37.244759  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.244768  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:37.244774  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:37.244838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:37.269770  299667 cri.go:89] found id: ""
	I1205 07:47:37.269794  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.269802  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:37.269809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:37.269865  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:37.296343  299667 cri.go:89] found id: ""
	I1205 07:47:37.296367  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.296376  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:37.296382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:37.296444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:37.321553  299667 cri.go:89] found id: ""
	I1205 07:47:37.321576  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.321585  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:37.321592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:37.321651  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:37.356802  299667 cri.go:89] found id: ""
	I1205 07:47:37.356824  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.356834  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:37.356841  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:37.356901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:37.384475  299667 cri.go:89] found id: ""
	I1205 07:47:37.384497  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.384505  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:37.384513  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:37.384524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:37.451184  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:37.451220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:37.465508  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:37.465535  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:37.531461  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:37.531483  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:37.531495  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:37.556492  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:37.556531  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
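
The "container status" command directly above uses a shell fallback chain: the backticks substitute crictl's full path via which, the inner || echo crictl keeps the command word non-empty when which finds nothing, and the outer || sudo docker ps -a is the last resort for Docker-runtime clusters. The same idiom written out as a sketch, using the more portable command -v:

    # list all containers via crictl if available, otherwise fall back to docker
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
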
	I1205 07:47:40.084953  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:40.099166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:40.099240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:40.129037  299667 cri.go:89] found id: ""
	I1205 07:47:40.129058  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.129066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:40.129074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:40.129147  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:40.166711  299667 cri.go:89] found id: ""
	I1205 07:47:40.166735  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.166743  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:40.166752  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:40.166813  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:40.192959  299667 cri.go:89] found id: ""
	I1205 07:47:40.192982  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.192991  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:40.192998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:40.193056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:40.218168  299667 cri.go:89] found id: ""
	I1205 07:47:40.218193  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.218202  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:40.218208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:40.218292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:40.243397  299667 cri.go:89] found id: ""
	I1205 07:47:40.243420  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.243428  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:40.243435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:40.243510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:40.268685  299667 cri.go:89] found id: ""
	I1205 07:47:40.268710  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.268718  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:40.268725  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:40.268802  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:40.294417  299667 cri.go:89] found id: ""
	I1205 07:47:40.294443  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.294452  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:40.294480  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:40.294561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:40.321495  299667 cri.go:89] found id: ""
	I1205 07:47:40.321556  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.321570  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:40.321580  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:40.321592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.360106  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:40.360133  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:40.420594  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:40.420627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:40.437302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:40.437332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:40.503821  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:40.503843  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:40.503855  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.028974  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:43.039847  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:43.039922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:43.066179  299667 cri.go:89] found id: ""
	I1205 07:47:43.066202  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.066210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:43.066216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:43.066274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:43.092504  299667 cri.go:89] found id: ""
	I1205 07:47:43.092528  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.092536  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:43.092543  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:43.092610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:43.124060  299667 cri.go:89] found id: ""
	I1205 07:47:43.124086  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.124095  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:43.124102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:43.124166  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:43.154063  299667 cri.go:89] found id: ""
	I1205 07:47:43.154089  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.154098  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:43.154104  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:43.154174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:43.185231  299667 cri.go:89] found id: ""
	I1205 07:47:43.185255  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.185264  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:43.185271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:43.185334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:43.214039  299667 cri.go:89] found id: ""
	I1205 07:47:43.214113  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.214135  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:43.214153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:43.214239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:43.239645  299667 cri.go:89] found id: ""
	I1205 07:47:43.239709  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.239730  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:43.239747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:43.239836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:43.264373  299667 cri.go:89] found id: ""
	I1205 07:47:43.264437  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.264458  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:43.264478  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:43.264514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:43.320427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:43.320464  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:43.334556  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:43.334586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:43.419578  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:43.419600  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:43.419613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.444937  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:43.444974  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
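
The kubelet and containerd steps in each pass shell out to journalctl with short flags: -u selects the systemd unit and -n caps the output at the newest 400 entries. An equivalent long-option form, assuming a systemd host and with the pager explicitly disabled for non-interactive use:

    sudo journalctl --unit=kubelet --lines=400 --no-pager
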
	I1205 07:47:45.973125  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:45.983741  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:45.983836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:46.021150  299667 cri.go:89] found id: ""
	I1205 07:47:46.021200  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.021208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:46.021215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:46.021296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:46.046658  299667 cri.go:89] found id: ""
	I1205 07:47:46.046688  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.046725  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:46.046732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:46.046806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:46.072039  299667 cri.go:89] found id: ""
	I1205 07:47:46.072113  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.072136  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:46.072153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:46.072239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:46.117323  299667 cri.go:89] found id: ""
	I1205 07:47:46.117399  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.117423  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:46.117448  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:46.117538  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:46.154886  299667 cri.go:89] found id: ""
	I1205 07:47:46.154912  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.154921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:46.154928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:46.155012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:46.181153  299667 cri.go:89] found id: ""
	I1205 07:47:46.181199  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.181208  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:46.181215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:46.181302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:46.211244  299667 cri.go:89] found id: ""
	I1205 07:47:46.211270  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.211279  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:46.211285  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:46.211346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:46.235089  299667 cri.go:89] found id: ""
	I1205 07:47:46.235164  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.235180  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:46.235191  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:46.235203  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:46.305530  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:46.305551  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:46.305563  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:46.330757  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:46.330792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:46.376750  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:46.376781  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:46.439507  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:46.439542  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
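
Note the cadence: the pgrep probes land about every three seconds, from 07:47:31 through this pass, which is the signature of a poll loop waiting for an apiserver that never appears. A minimal sketch of such a wait, with the interval chosen here as an assumption rather than read from minikube's implementation:

    # poll every 3s until an apiserver process shows up (illustrative only)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
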
	I1205 07:47:48.953904  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:48.964561  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:48.964628  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:48.987874  299667 cri.go:89] found id: ""
	I1205 07:47:48.987900  299667 logs.go:282] 0 containers: []
	W1205 07:47:48.987909  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:48.987916  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:48.987974  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:49.014890  299667 cri.go:89] found id: ""
	I1205 07:47:49.014966  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.014980  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:49.014988  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:49.015065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:49.040290  299667 cri.go:89] found id: ""
	I1205 07:47:49.040313  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.040321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:49.040328  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:49.040385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:49.065216  299667 cri.go:89] found id: ""
	I1205 07:47:49.065278  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.065287  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:49.065293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:49.065350  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:49.091916  299667 cri.go:89] found id: ""
	I1205 07:47:49.091941  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.091950  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:49.091956  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:49.092015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:49.122078  299667 cri.go:89] found id: ""
	I1205 07:47:49.122101  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.122110  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:49.122117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:49.122174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:49.148378  299667 cri.go:89] found id: ""
	I1205 07:47:49.148400  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.148409  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:49.148415  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:49.148474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:49.181597  299667 cri.go:89] found id: ""
	I1205 07:47:49.181623  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.181639  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:49.181649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:49.181660  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:49.237429  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:49.237462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:49.252514  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:49.252540  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:49.317886  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:49.317908  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:49.317922  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:49.343471  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:49.343503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
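
The dmesg step in each pass narrows the kernel ring buffer to actionable messages: -H requests human-readable output, -P disables the pager, -L=never turns off color, and --level keeps only warnings and worse. Spelled out with long options, assuming a util-linux dmesg:

    sudo dmesg --human --nopager --color=never --level=warn,err,crit,alert,emerg | tail -n 400
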
	I1205 07:47:51.885282  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:51.895713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:51.895806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:51.923558  299667 cri.go:89] found id: ""
	I1205 07:47:51.923582  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.923592  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:51.923599  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:51.923702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:51.952466  299667 cri.go:89] found id: ""
	I1205 07:47:51.952490  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.952499  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:51.952506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:51.952594  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:51.977008  299667 cri.go:89] found id: ""
	I1205 07:47:51.977032  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.977041  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:51.977048  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:51.977130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:52.001855  299667 cri.go:89] found id: ""
	I1205 07:47:52.001880  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.001890  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:52.001918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:52.002010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:52.041299  299667 cri.go:89] found id: ""
	I1205 07:47:52.041367  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.041391  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:52.041410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:52.041490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:52.066425  299667 cri.go:89] found id: ""
	I1205 07:47:52.066448  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.066457  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:52.066484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:52.066567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:52.093389  299667 cri.go:89] found id: ""
	I1205 07:47:52.093415  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.093425  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:52.093431  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:52.093490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:52.131379  299667 cri.go:89] found id: ""
	I1205 07:47:52.131404  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.131412  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:52.131421  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:52.131432  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:52.172215  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:52.172246  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:52.232285  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:52.232317  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:52.246383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:52.246461  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:52.312938  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:52.312999  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:52.313037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:54.839218  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:54.849526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:54.849596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:54.878984  299667 cri.go:89] found id: ""
	I1205 07:47:54.879018  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.879028  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:54.879034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:54.879115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:54.903570  299667 cri.go:89] found id: ""
	I1205 07:47:54.903593  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.903603  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:54.903609  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:54.903668  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:54.928679  299667 cri.go:89] found id: ""
	I1205 07:47:54.928701  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.928710  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:54.928716  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:54.928772  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:54.957443  299667 cri.go:89] found id: ""
	I1205 07:47:54.957465  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.957474  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:54.957481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:54.957539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:54.981997  299667 cri.go:89] found id: ""
	I1205 07:47:54.982022  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.982031  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:54.982037  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:54.982097  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:55.019658  299667 cri.go:89] found id: ""
	I1205 07:47:55.019684  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.019694  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:55.019702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:55.019774  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:55.045945  299667 cri.go:89] found id: ""
	I1205 07:47:55.045968  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.045977  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:55.045982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:55.046047  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:55.070660  299667 cri.go:89] found id: ""
	I1205 07:47:55.070682  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.070691  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:55.070753  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:55.070772  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:55.155877  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:55.155904  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:55.155918  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:55.182506  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:55.182538  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:55.209519  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:55.209545  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:55.268283  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:55.268315  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
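Each cri.go:54/cri.go:89 pair above shells out to crictl once per component name and collects whatever container IDs come back; the empty `found id: ""` results are what drive the `No container was found matching ...` warnings. A rough Go equivalent of that listing step, reusing the exact crictl flags from the log (listContainers is a hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name matches. crictl exits 0 with empty output when none match.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            fmt.Printf("%s: %d containers %v (err: %v)\n", name, len(ids), ids, err)
        }
    }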
	I1205 07:47:57.781956  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:57.792419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:57.792511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:57.816805  299667 cri.go:89] found id: ""
	I1205 07:47:57.816830  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.816839  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:57.816845  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:57.816907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:57.844943  299667 cri.go:89] found id: ""
	I1205 07:47:57.844967  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.844975  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:57.844982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:57.845041  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:57.869698  299667 cri.go:89] found id: ""
	I1205 07:47:57.869720  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.869728  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:57.869735  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:57.869792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:57.894855  299667 cri.go:89] found id: ""
	I1205 07:47:57.894881  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.894889  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:57.894896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:57.895015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:57.919181  299667 cri.go:89] found id: ""
	I1205 07:47:57.919207  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.919217  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:57.919223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:57.919284  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:57.947523  299667 cri.go:89] found id: ""
	I1205 07:47:57.947545  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.947553  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:57.947559  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:57.947617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:57.972190  299667 cri.go:89] found id: ""
	I1205 07:47:57.972212  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.972221  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:57.972227  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:57.972337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:57.995598  299667 cri.go:89] found id: ""
	I1205 07:47:57.995620  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.995628  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:57.995637  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:57.995648  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:58.053180  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:58.053214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:58.066958  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:58.067035  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:58.148853  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:58.148871  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:58.148884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:58.177078  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:58.177111  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
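Every describe-nodes attempt fails the same way: kubectl's discovery client (memcache.go) cannot even open a TCP connection to localhost:8443, so no API request is ever sent. `connection refused`, as opposed to a timeout, means nothing is listening on the port at all, which is consistent with the empty kube-apiserver container listings. A quick standalone check along those lines, sketched in Go (the address is taken from the log; the interpretation in the comments is a general networking fact, not minikube output):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" confirms no listener on the port;
        // a timeout would instead point at a firewall or routing problem.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }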
	I1205 07:48:00.709764  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:00.720636  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:00.720709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:00.745332  299667 cri.go:89] found id: ""
	I1205 07:48:00.745357  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.745367  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:00.745377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:00.745446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:00.769743  299667 cri.go:89] found id: ""
	I1205 07:48:00.769766  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.769774  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:00.769780  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:00.769838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:00.793723  299667 cri.go:89] found id: ""
	I1205 07:48:00.793747  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.793755  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:00.793761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:00.793849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:00.822270  299667 cri.go:89] found id: ""
	I1205 07:48:00.822295  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.822304  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:00.822311  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:00.822372  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:00.846055  299667 cri.go:89] found id: ""
	I1205 07:48:00.846079  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.846088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:00.846094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:00.846154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:00.875896  299667 cri.go:89] found id: ""
	I1205 07:48:00.875927  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.875938  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:00.875945  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:00.876005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:00.901376  299667 cri.go:89] found id: ""
	I1205 07:48:00.901401  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.901410  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:00.901417  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:00.901478  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:00.931038  299667 cri.go:89] found id: ""
	I1205 07:48:00.931062  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.931070  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:00.931080  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:00.931121  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:00.997183  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:00.997205  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:00.997217  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:01.023514  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:01.023552  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:01.051665  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:01.051694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:01.112451  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:01.112528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
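The gather phase repeats the same five commands verbatim on every iteration. To reproduce the same diagnostic bundle by hand, outside of minikube, one could wrap the commands copied from the log like this (a sketch; errors are ignored and output is simply printed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The exact gather commands from the log above, run through bash
        // just as ssh_runner.go does.
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "sudo journalctl -u containerd -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("== %s ==\n%s\n", c, out)
        }
    }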
	I1205 07:48:03.628641  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:03.640043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:03.640115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:03.668895  299667 cri.go:89] found id: ""
	I1205 07:48:03.668923  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.668932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:03.668939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:03.669005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:03.698851  299667 cri.go:89] found id: ""
	I1205 07:48:03.698873  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.698882  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:03.698888  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:03.698946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:03.724736  299667 cri.go:89] found id: ""
	I1205 07:48:03.724758  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.724767  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:03.724773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:03.724831  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:03.751007  299667 cri.go:89] found id: ""
	I1205 07:48:03.751030  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.751038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:03.751072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:03.751143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:03.779130  299667 cri.go:89] found id: ""
	I1205 07:48:03.779153  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.779162  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:03.779168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:03.779226  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:03.808717  299667 cri.go:89] found id: ""
	I1205 07:48:03.808738  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.808798  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:03.808812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:03.808893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:03.834648  299667 cri.go:89] found id: ""
	I1205 07:48:03.834745  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.834769  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:03.834790  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:03.834894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:03.860266  299667 cri.go:89] found id: ""
	I1205 07:48:03.860290  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.860298  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:03.860307  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:03.860326  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:03.925650  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:03.925672  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:03.925684  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:03.951836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:03.951866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:03.981147  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:03.981199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:04.037271  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:04.037308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.551820  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:06.562850  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:06.562922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:06.588022  299667 cri.go:89] found id: ""
	I1205 07:48:06.588044  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.588052  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:06.588059  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:06.588121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:06.618654  299667 cri.go:89] found id: ""
	I1205 07:48:06.618677  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.618687  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:06.618693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:06.618760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:06.654167  299667 cri.go:89] found id: ""
	I1205 07:48:06.654188  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.654197  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:06.654203  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:06.654261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:06.681234  299667 cri.go:89] found id: ""
	I1205 07:48:06.681306  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.681327  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:06.681345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:06.681437  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:06.705922  299667 cri.go:89] found id: ""
	I1205 07:48:06.705946  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.705955  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:06.705962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:06.706044  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:06.730881  299667 cri.go:89] found id: ""
	I1205 07:48:06.730913  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.730924  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:06.730930  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:06.730987  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:06.755636  299667 cri.go:89] found id: ""
	I1205 07:48:06.755661  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.755670  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:06.755676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:06.755743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:06.780702  299667 cri.go:89] found id: ""
	I1205 07:48:06.780735  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.780743  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:06.780753  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:06.780764  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:06.841265  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:06.841303  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.854661  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:06.854686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:06.918298  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:06.918316  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:06.918328  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:06.943239  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:06.943274  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.471658  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:09.482526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:09.482598  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:09.507658  299667 cri.go:89] found id: ""
	I1205 07:48:09.507683  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.507692  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:09.507699  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:09.507765  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:09.538688  299667 cri.go:89] found id: ""
	I1205 07:48:09.538744  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.538758  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:09.538765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:09.538835  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:09.564016  299667 cri.go:89] found id: ""
	I1205 07:48:09.564041  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.564050  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:09.564056  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:09.564118  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:09.595020  299667 cri.go:89] found id: ""
	I1205 07:48:09.595047  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.595056  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:09.595062  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:09.595170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:09.627725  299667 cri.go:89] found id: ""
	I1205 07:48:09.627747  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.627756  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:09.627763  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:09.627821  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:09.661208  299667 cri.go:89] found id: ""
	I1205 07:48:09.661273  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.661290  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:09.661297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:09.661371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:09.686173  299667 cri.go:89] found id: ""
	I1205 07:48:09.686207  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.686216  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:09.686223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:09.686291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:09.710385  299667 cri.go:89] found id: ""
	I1205 07:48:09.710417  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.710426  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:09.710435  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:09.710447  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:09.724065  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:09.724089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:09.786352  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:09.786371  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:09.786383  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:09.814782  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:09.814823  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.845678  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:09.845705  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:12.403586  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:12.414137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:12.414208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:12.443644  299667 cri.go:89] found id: ""
	I1205 07:48:12.443666  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.443677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:12.443683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:12.443743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:12.468970  299667 cri.go:89] found id: ""
	I1205 07:48:12.468992  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.469001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:12.469007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:12.469073  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:12.495420  299667 cri.go:89] found id: ""
	I1205 07:48:12.495441  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.495449  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:12.495455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:12.495513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:12.520821  299667 cri.go:89] found id: ""
	I1205 07:48:12.520848  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.520857  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:12.520862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:12.520920  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:12.546738  299667 cri.go:89] found id: ""
	I1205 07:48:12.546767  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.546776  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:12.546782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:12.546845  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:12.571663  299667 cri.go:89] found id: ""
	I1205 07:48:12.571687  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.571696  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:12.571702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:12.571759  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:12.600237  299667 cri.go:89] found id: ""
	I1205 07:48:12.600263  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.600272  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:12.600279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:12.600336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:12.645073  299667 cri.go:89] found id: ""
	I1205 07:48:12.645108  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.645116  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:12.645126  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:12.645137  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:12.661987  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:12.662020  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:12.726418  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:12.726442  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:12.726455  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:12.751208  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:12.751243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:12.780690  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:12.780718  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.336959  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:15.349150  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:15.349233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:15.379055  299667 cri.go:89] found id: ""
	I1205 07:48:15.379075  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.379084  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:15.379090  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:15.379148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:15.411812  299667 cri.go:89] found id: ""
	I1205 07:48:15.411832  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.411841  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:15.411849  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:15.411907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:15.436056  299667 cri.go:89] found id: ""
	I1205 07:48:15.436077  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.436085  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:15.436091  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:15.436152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:15.461323  299667 cri.go:89] found id: ""
	I1205 07:48:15.461345  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.461354  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:15.461360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:15.461416  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:15.490552  299667 cri.go:89] found id: ""
	I1205 07:48:15.490577  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.490586  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:15.490593  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:15.490682  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:15.519448  299667 cri.go:89] found id: ""
	I1205 07:48:15.519471  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.519480  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:15.519487  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:15.519544  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:15.548923  299667 cri.go:89] found id: ""
	I1205 07:48:15.548947  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.548956  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:15.548962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:15.549024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:15.574804  299667 cri.go:89] found id: ""
	I1205 07:48:15.574828  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.574839  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:15.574847  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:15.574878  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.634392  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:15.634428  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:15.651971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:15.651998  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:15.719384  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:15.719407  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:15.719418  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:15.743909  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:15.743941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
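
The block above is one full iteration of minikube's log-gathering loop: it probes for a kube-apiserver process, asks crictl for each control-plane component's containers, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal Go sketch of the per-component check follows; `components` and `listContainerIDs` are illustrative stand-ins for minikube's cri.go helpers, not its actual API, and the sketch runs crictl locally where minikube runs it over SSH inside the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names the log checks with
// `sudo crictl ps -a --quiet --name=<component>`.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainerIDs returns the container IDs crictl reports for one
// component name; an empty slice means no container exists in any state.
func listContainerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		ids := listContainerIDs(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

In the run above every component returns an empty list, which is why each cycle degrades into the journalctl/dmesg fallback rather than reading component logs.
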
	I1205 07:48:18.273819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:18.284902  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:18.284975  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:18.310770  299667 cri.go:89] found id: ""
	I1205 07:48:18.310793  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.310802  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:18.310809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:18.310868  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:18.335509  299667 cri.go:89] found id: ""
	I1205 07:48:18.335530  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.335538  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:18.335544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:18.335602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:18.367849  299667 cri.go:89] found id: ""
	I1205 07:48:18.367875  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.367884  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:18.367890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:18.367947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:18.397008  299667 cri.go:89] found id: ""
	I1205 07:48:18.397037  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.397046  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:18.397053  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:18.397115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:18.422994  299667 cri.go:89] found id: ""
	I1205 07:48:18.423017  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.423035  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:18.423043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:18.423109  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:18.447590  299667 cri.go:89] found id: ""
	I1205 07:48:18.447666  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.447689  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:18.447713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:18.447801  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:18.472279  299667 cri.go:89] found id: ""
	I1205 07:48:18.472353  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.472375  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:18.472392  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:18.472477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:18.497432  299667 cri.go:89] found id: ""
	I1205 07:48:18.497454  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.497463  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:18.497471  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:18.497484  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:18.522163  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:18.522196  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.550354  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:18.550378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:18.605871  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:18.605944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:18.623406  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:18.623435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:18.692830  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
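
Between gather cycles the log shows a single liveness probe, `sudo pgrep -xnf kube-apiserver.*minikube.*`, repeated roughly every three seconds (07:48:15, :18, :21, ...). A sketch of that wait loop, assuming a plain local exec in place of minikube's ssh_runner; `waitForAPIServer` is a hypothetical name, not a minikube function.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until a
// matching process exists or the deadline passes. pgrep exits non-zero when
// nothing matches, which exec.Run reports as an error, so a nil error means
// an apiserver process was found.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}

In this test run the probe never succeeds, so the loop keeps interleaving pgrep attempts with full log-gathering passes until the outer test timeout fires.
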
	I1205 07:48:21.193117  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:21.203367  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:21.203430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:21.228233  299667 cri.go:89] found id: ""
	I1205 07:48:21.228257  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.228265  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:21.228272  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:21.228331  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:21.256427  299667 cri.go:89] found id: ""
	I1205 07:48:21.256448  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.256456  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:21.256462  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:21.256523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:21.281113  299667 cri.go:89] found id: ""
	I1205 07:48:21.281136  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.281145  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:21.281151  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:21.281238  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:21.305777  299667 cri.go:89] found id: ""
	I1205 07:48:21.305798  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.305806  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:21.305812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:21.305869  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:21.335558  299667 cri.go:89] found id: ""
	I1205 07:48:21.335622  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.335645  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:21.335662  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:21.335745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:21.374161  299667 cri.go:89] found id: ""
	I1205 07:48:21.374230  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.374257  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:21.374275  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:21.374358  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:21.403378  299667 cri.go:89] found id: ""
	I1205 07:48:21.403442  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.403464  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:21.403481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:21.403561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:21.428681  299667 cri.go:89] found id: ""
	I1205 07:48:21.428707  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.428717  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:21.428725  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:21.428736  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:21.485472  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:21.485503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:21.499440  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:21.499521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:21.564057  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:21.564088  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:21.564102  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:21.588591  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:21.588627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.133263  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:24.145210  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:24.145292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:24.172487  299667 cri.go:89] found id: ""
	I1205 07:48:24.172509  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.172517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:24.172523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:24.172582  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:24.197589  299667 cri.go:89] found id: ""
	I1205 07:48:24.197612  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.197634  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:24.197641  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:24.197727  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:24.232698  299667 cri.go:89] found id: ""
	I1205 07:48:24.232773  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.232803  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:24.232821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:24.232927  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:24.261831  299667 cri.go:89] found id: ""
	I1205 07:48:24.261854  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.261863  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:24.261870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:24.261932  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:24.290390  299667 cri.go:89] found id: ""
	I1205 07:48:24.290412  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.290420  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:24.290426  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:24.290486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:24.314257  299667 cri.go:89] found id: ""
	I1205 07:48:24.314327  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.314360  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:24.314383  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:24.314475  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:24.338446  299667 cri.go:89] found id: ""
	I1205 07:48:24.338469  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.338477  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:24.338484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:24.338542  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:24.366265  299667 cri.go:89] found id: ""
	I1205 07:48:24.366302  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.366314  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:24.366323  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:24.366335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:24.398722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:24.398759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.430842  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:24.430872  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:24.486913  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:24.486947  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:24.500309  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:24.500333  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:24.571107  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:27.072799  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:27.082983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:27.083049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:27.106973  299667 cri.go:89] found id: ""
	I1205 07:48:27.106997  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.107005  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:27.107012  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:27.107072  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:27.131580  299667 cri.go:89] found id: ""
	I1205 07:48:27.131604  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.131613  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:27.131619  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:27.131679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:27.156330  299667 cri.go:89] found id: ""
	I1205 07:48:27.156356  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.156364  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:27.156371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:27.156434  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:27.180350  299667 cri.go:89] found id: ""
	I1205 07:48:27.180375  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.180384  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:27.180391  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:27.180449  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:27.204756  299667 cri.go:89] found id: ""
	I1205 07:48:27.204779  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.204787  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:27.204800  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:27.204858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:27.232181  299667 cri.go:89] found id: ""
	I1205 07:48:27.232207  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.232216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:27.232223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:27.232299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:27.258059  299667 cri.go:89] found id: ""
	I1205 07:48:27.258086  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.258095  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:27.258102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:27.258165  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:27.281695  299667 cri.go:89] found id: ""
	I1205 07:48:27.281717  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.281725  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:27.281734  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:27.281746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:27.294855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:27.294880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:27.362846  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:27.362868  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:27.362880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:27.389761  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:27.389791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:27.422138  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:27.422165  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:29.980506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:29.990724  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:29.990791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:30.035211  299667 cri.go:89] found id: ""
	I1205 07:48:30.035238  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.035248  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:30.035256  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:30.035326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:30.063908  299667 cri.go:89] found id: ""
	I1205 07:48:30.063944  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.063953  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:30.063960  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:30.064034  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:30.095785  299667 cri.go:89] found id: ""
	I1205 07:48:30.095860  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.095883  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:30.095908  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:30.096002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:30.123133  299667 cri.go:89] found id: ""
	I1205 07:48:30.123156  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.123166  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:30.123172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:30.123235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:30.149862  299667 cri.go:89] found id: ""
	I1205 07:48:30.149885  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.149894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:30.149901  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:30.150013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:30.175817  299667 cri.go:89] found id: ""
	I1205 07:48:30.175883  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.175903  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:30.175920  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:30.176005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:30.201607  299667 cri.go:89] found id: ""
	I1205 07:48:30.201631  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.201640  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:30.201646  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:30.201711  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:30.227899  299667 cri.go:89] found id: ""
	I1205 07:48:30.227922  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.227931  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:30.227940  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:30.227952  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:30.241708  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:30.241742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:30.309566  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:30.309584  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:30.309597  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:30.334740  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:30.334771  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:30.378494  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:30.378524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:32.939968  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:32.950759  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:32.950832  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:32.978406  299667 cri.go:89] found id: ""
	I1205 07:48:32.978430  299667 logs.go:282] 0 containers: []
	W1205 07:48:32.978438  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:32.978454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:32.978513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:33.008532  299667 cri.go:89] found id: ""
	I1205 07:48:33.008559  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.008568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:33.008574  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:33.008650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:33.033972  299667 cri.go:89] found id: ""
	I1205 07:48:33.033997  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.034005  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:33.034013  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:33.034081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:33.059992  299667 cri.go:89] found id: ""
	I1205 07:48:33.060014  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.060023  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:33.060029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:33.060094  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:33.090354  299667 cri.go:89] found id: ""
	I1205 07:48:33.090379  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.090387  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:33.090395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:33.090454  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:33.114706  299667 cri.go:89] found id: ""
	I1205 07:48:33.114735  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.114744  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:33.114751  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:33.114809  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:33.140456  299667 cri.go:89] found id: ""
	I1205 07:48:33.140481  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.140490  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:33.140496  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:33.140557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:33.169438  299667 cri.go:89] found id: ""
	I1205 07:48:33.169461  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.169469  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:33.169478  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:33.169490  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:33.195155  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:33.195189  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:33.221590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:33.221617  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:33.277078  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:33.277110  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:33.290419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:33.290445  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:33.357621  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:35.857840  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:35.869455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:35.869525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:35.904563  299667 cri.go:89] found id: ""
	I1205 07:48:35.904585  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.904594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:35.904601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:35.904664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:35.932592  299667 cri.go:89] found id: ""
	I1205 07:48:35.932613  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.932622  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:35.932628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:35.932690  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:35.961011  299667 cri.go:89] found id: ""
	I1205 07:48:35.961033  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.961048  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:35.961055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:35.961121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:35.988109  299667 cri.go:89] found id: ""
	I1205 07:48:35.988131  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.988139  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:35.988146  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:35.988212  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:36.021866  299667 cri.go:89] found id: ""
	I1205 07:48:36.021894  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.021903  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:36.021910  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:36.021980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:36.053675  299667 cri.go:89] found id: ""
	I1205 07:48:36.053697  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.053706  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:36.053713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:36.053773  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:36.088227  299667 cri.go:89] found id: ""
	I1205 07:48:36.088252  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.088261  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:36.088268  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:36.088330  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:36.114723  299667 cri.go:89] found id: ""
	I1205 07:48:36.114753  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.114762  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:36.114772  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:36.114792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:36.130077  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:36.130105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:36.199710  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
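
Every `kubectl describe nodes` attempt above fails the same way: `dial tcp [::1]:8443: connect: connection refused`, meaning nothing is listening on the apiserver port at all, which is consistent with crictl finding zero kube-apiserver containers. The error string is Go's net-package dial error; a short hypothetical probe (not minikube code) reproduces exactly that error class when the port is closed.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// DialTimeout fails with "connect: connection refused" when no process
	// is bound to the port -- the same error kubectl's client surfaces above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
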
	I1205 07:48:36.199733  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:36.199746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:36.224920  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:36.224953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:36.260346  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:36.260373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:38.818746  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:38.829029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:38.829103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:38.861723  299667 cri.go:89] found id: ""
	I1205 07:48:38.861746  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.861755  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:38.861761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:38.861827  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:38.889749  299667 cri.go:89] found id: ""
	I1205 07:48:38.889772  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.889781  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:38.889787  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:38.889849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:38.925308  299667 cri.go:89] found id: ""
	I1205 07:48:38.925337  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.925346  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:38.925352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:38.925412  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:38.955710  299667 cri.go:89] found id: ""
	I1205 07:48:38.955732  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.955740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:38.955746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:38.955803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:38.980907  299667 cri.go:89] found id: ""
	I1205 07:48:38.980934  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.980943  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:38.980951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:38.981013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:39.011368  299667 cri.go:89] found id: ""
	I1205 07:48:39.011398  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.011409  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:39.011416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:39.011489  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:39.037693  299667 cri.go:89] found id: ""
	I1205 07:48:39.037719  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.037727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:39.037734  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:39.037806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:39.063915  299667 cri.go:89] found id: ""
	I1205 07:48:39.063940  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.063949  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
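Each cycle then sweeps the CRI runtime for every expected control-plane container by name, and every query here returns an empty ID list: nothing from the control plane was ever created, which is consistent with the connection refusals above. A hand-run sketch of the same sweep:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  # -a includes exited containers; --quiet prints bare container IDs
	  echo "$c: $(sudo crictl ps -a --quiet --name="$c" | wc -l) found"
	done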
	I1205 07:48:39.063957  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:39.063969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:39.120923  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:39.120960  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
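The dmesg call above trims the kernel log to warning-and-above records so the report stays bounded and readable. For reference, the flags (util-linux dmesg) mean:

	# -P  disable the pager          -H  human-readable output
	# -L=never  suppress color       --level  keep only the listed priorities
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400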
	I1205 07:48:39.134276  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:39.134302  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:39.194044  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
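Every kubectl attempt in this stretch dies the same way: nothing is listening on port 8443 inside the node, so the TCP connection is refused before TLS or authentication is even attempted. A quick way to confirm that by hand, as a sketch (/readyz is served by any modern apiserver):

	# -k skips certificate verification; a refused connection here points at a
	# missing apiserver process rather than a certificate or RBAC problem
	curl -sk https://localhost:8443/readyz || echo 'apiserver not listening'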
	I1205 07:48:39.194064  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:39.194076  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:39.218536  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:39.218569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
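The container-status command above is deliberately defensive: the backquoted substitution `which crictl || echo crictl` yields the resolved crictl path when one exists and the bare name otherwise, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI if no CRI client answers. Expanded, the fallback chain reads:

	# 1. use crictl from PATH if which finds it;
	# 2. otherwise try the literal name 'crictl';
	# 3. if that still fails, list containers through Docker instead
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a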
	I1205 07:48:41.747231  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:41.758180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:41.758258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:41.785400  299667 cri.go:89] found id: ""
	I1205 07:48:41.785426  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.785435  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:41.785442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:41.785509  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:41.817641  299667 cri.go:89] found id: ""
	I1205 07:48:41.817667  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.817676  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:41.817683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:41.817747  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:41.842820  299667 cri.go:89] found id: ""
	I1205 07:48:41.842846  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.842855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:41.842869  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:41.842933  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:41.880166  299667 cri.go:89] found id: ""
	I1205 07:48:41.880194  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.880208  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:41.880214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:41.880291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:41.911193  299667 cri.go:89] found id: ""
	I1205 07:48:41.911258  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.911273  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:41.911281  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:41.911337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:41.935720  299667 cri.go:89] found id: ""
	I1205 07:48:41.935745  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.935754  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:41.935761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:41.935823  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:41.962907  299667 cri.go:89] found id: ""
	I1205 07:48:41.962976  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.962992  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:41.962998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:41.963065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:41.991087  299667 cri.go:89] found id: ""
	I1205 07:48:41.991113  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.991121  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:41.991130  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:41.991140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:42.070025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:42.070073  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:42.086499  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:42.086528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:42.164053  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:42.164130  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:42.164162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:42.192298  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:42.192342  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:44.734604  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:44.745356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:44.745423  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:44.770206  299667 cri.go:89] found id: ""
	I1205 07:48:44.770230  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.770239  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:44.770247  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:44.770305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:44.796086  299667 cri.go:89] found id: ""
	I1205 07:48:44.796109  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.796118  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:44.796124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:44.796182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:44.822053  299667 cri.go:89] found id: ""
	I1205 07:48:44.822125  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.822148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:44.822167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:44.822258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:44.855227  299667 cri.go:89] found id: ""
	I1205 07:48:44.855298  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.855320  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:44.855339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:44.855422  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:44.884787  299667 cri.go:89] found id: ""
	I1205 07:48:44.884859  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.885835  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:44.885875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:44.885967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:44.922015  299667 cri.go:89] found id: ""
	I1205 07:48:44.922040  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.922048  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:44.922055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:44.922120  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:44.946942  299667 cri.go:89] found id: ""
	I1205 07:48:44.946979  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.946988  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:44.946995  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:44.947056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:44.972229  299667 cri.go:89] found id: ""
	I1205 07:48:44.972253  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.972262  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:44.972270  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:44.972280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:44.997401  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:44.997434  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:45.054576  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:45.054602  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:45.133742  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:45.133782  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:45.155399  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:45.155496  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:45.257582  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:47.759254  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:47.770034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:47.770107  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:47.799850  299667 cri.go:89] found id: ""
	I1205 07:48:47.799873  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.799882  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:47.799889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:47.799947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:47.824989  299667 cri.go:89] found id: ""
	I1205 07:48:47.825014  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.825022  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:47.825028  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:47.825089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:47.857967  299667 cri.go:89] found id: ""
	I1205 07:48:47.857993  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.858002  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:47.858008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:47.858065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:47.890800  299667 cri.go:89] found id: ""
	I1205 07:48:47.890833  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.890842  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:47.890851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:47.890911  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:47.921850  299667 cri.go:89] found id: ""
	I1205 07:48:47.921874  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.921883  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:47.921890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:47.921950  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:47.946404  299667 cri.go:89] found id: ""
	I1205 07:48:47.946426  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.946435  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:47.946442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:47.946501  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:47.972095  299667 cri.go:89] found id: ""
	I1205 07:48:47.972117  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.972125  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:47.972131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:47.972189  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:47.996555  299667 cri.go:89] found id: ""
	I1205 07:48:47.996577  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.996585  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:47.996594  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:47.996605  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:48.054087  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:48.054122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:48.069006  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:48.069038  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:48.132946  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:48.132968  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:48.132981  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
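Both service logs are collected with the same journalctl shape: -u selects the systemd unit and -n 400 caps the tail at the most recent 400 records, which keeps the report bounded even on a long-running node. Run directly:

	sudo journalctl -u containerd -n 400   # container runtime tail
	sudo journalctl -u kubelet -n 400      # kubelet tail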
	I1205 07:48:48.158949  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:48.158986  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:50.687838  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:50.698642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:50.698712  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:50.725092  299667 cri.go:89] found id: ""
	I1205 07:48:50.725113  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.725121  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:50.725128  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:50.725208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:50.750131  299667 cri.go:89] found id: ""
	I1205 07:48:50.750153  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.750161  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:50.750167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:50.750233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:50.774733  299667 cri.go:89] found id: ""
	I1205 07:48:50.774755  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.774765  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:50.774773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:50.774858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:50.803492  299667 cri.go:89] found id: ""
	I1205 07:48:50.803514  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.803524  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:50.803531  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:50.803596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:50.828915  299667 cri.go:89] found id: ""
	I1205 07:48:50.828938  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.828947  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:50.828953  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:50.829022  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:50.862065  299667 cri.go:89] found id: ""
	I1205 07:48:50.862090  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.862098  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:50.862105  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:50.862168  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:50.888327  299667 cri.go:89] found id: ""
	I1205 07:48:50.888356  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.888365  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:50.888371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:50.888432  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:50.917551  299667 cri.go:89] found id: ""
	I1205 07:48:50.917583  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.917592  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:50.917601  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:50.917613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:50.976691  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:50.976725  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:50.990259  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:50.990285  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:51.057592  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:51.057614  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:51.057628  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:51.088874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:51.088916  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.619589  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:53.630457  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:53.630521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:53.662396  299667 cri.go:89] found id: ""
	I1205 07:48:53.662420  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.662429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:53.662435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:53.662493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:53.687365  299667 cri.go:89] found id: ""
	I1205 07:48:53.687393  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.687402  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:53.687408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:53.687469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:53.711757  299667 cri.go:89] found id: ""
	I1205 07:48:53.711782  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.711791  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:53.711798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:53.711893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:53.735695  299667 cri.go:89] found id: ""
	I1205 07:48:53.735721  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.735730  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:53.735736  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:53.735793  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:53.763008  299667 cri.go:89] found id: ""
	I1205 07:48:53.763032  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.763041  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:53.763047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:53.763104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:53.791424  299667 cri.go:89] found id: ""
	I1205 07:48:53.791498  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.791520  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:53.791537  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:53.791617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:53.815855  299667 cri.go:89] found id: ""
	I1205 07:48:53.815876  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.815884  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:53.815890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:53.815946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:53.839524  299667 cri.go:89] found id: ""
	I1205 07:48:53.839548  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.839557  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:53.839565  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:53.839577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.884515  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:53.884591  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:53.947646  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:53.947682  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:53.961152  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:53.961211  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:54.031297  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:54.031321  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:54.031335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.557021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:56.567576  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:56.567694  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:56.596257  299667 cri.go:89] found id: ""
	I1205 07:48:56.596291  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.596300  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:56.596306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:56.596381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:56.627549  299667 cri.go:89] found id: ""
	I1205 07:48:56.627575  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.627583  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:56.627590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:56.627649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:56.661291  299667 cri.go:89] found id: ""
	I1205 07:48:56.661313  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.661321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:56.661332  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:56.661391  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:56.687435  299667 cri.go:89] found id: ""
	I1205 07:48:56.687462  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.687471  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:56.687477  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:56.687540  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:56.712238  299667 cri.go:89] found id: ""
	I1205 07:48:56.712261  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.712271  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:56.712277  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:56.712340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:56.736638  299667 cri.go:89] found id: ""
	I1205 07:48:56.736663  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.736672  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:56.736690  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:56.736748  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:56.760967  299667 cri.go:89] found id: ""
	I1205 07:48:56.761001  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.761010  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:56.761016  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:56.761075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:56.784912  299667 cri.go:89] found id: ""
	I1205 07:48:56.784939  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.784947  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:56.784958  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:56.784969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.808701  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:56.808734  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:56.835856  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:56.835884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:56.896082  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:56.896154  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:56.914235  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:56.914310  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:56.981742  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:59.483411  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:59.494080  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:59.494149  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:59.521983  299667 cri.go:89] found id: ""
	I1205 07:48:59.522007  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.522015  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:59.522023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:59.522081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:59.547605  299667 cri.go:89] found id: ""
	I1205 07:48:59.547637  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.547646  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:59.547652  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:59.547718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:59.572816  299667 cri.go:89] found id: ""
	I1205 07:48:59.572839  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.572847  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:59.572854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:59.572909  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:59.598049  299667 cri.go:89] found id: ""
	I1205 07:48:59.598070  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.598078  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:59.598085  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:59.598145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:59.624907  299667 cri.go:89] found id: ""
	I1205 07:48:59.624928  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.624937  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:59.624943  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:59.625001  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:59.651926  299667 cri.go:89] found id: ""
	I1205 07:48:59.651947  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.651955  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:59.651962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:59.652019  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:59.680003  299667 cri.go:89] found id: ""
	I1205 07:48:59.680080  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.680103  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:59.680120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:59.680228  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:59.705437  299667 cri.go:89] found id: ""
	I1205 07:48:59.705465  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.705474  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:59.705483  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:59.705493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:59.763111  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:59.763142  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:59.777300  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:59.777368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:59.842575  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
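Editorial note: the stderr above pins the failure down. kubectl inside the node is dialing https://localhost:8443 and getting connection refused, meaning no apiserver process is listening, which matches the empty crictl listings. A quick stand-alone check of the same condition from Go (a sketch, not minikube code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Same failure mode as the log: dial tcp [::1]:8443: connect: connection refused.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8443")
    }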
	I1205 07:48:59.842643  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:59.842663  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:59.869833  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:59.869908  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
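Editorial note: the "container status" gatherer is deliberately defensive. The substitution `which crictl || echo crictl` keeps the command line non-empty even when crictl is missing from PATH, and the trailing `|| sudo docker ps -a` falls back to the docker CLI if the crictl invocation fails. The same chain driven from Go, as a sketch (the command string is copied verbatim from the log line above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// crictl if resolvable, otherwise the literal word "crictl" (which then
    	// fails and triggers the docker fallback after ||).
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    }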
	I1205 07:49:02.402084  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:02.412782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:02.412851  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:02.438256  299667 cri.go:89] found id: ""
	I1205 07:49:02.438279  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.438287  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:02.438294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:02.438352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:02.465899  299667 cri.go:89] found id: ""
	I1205 07:49:02.465926  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.465935  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:02.465942  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:02.466005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:02.490481  299667 cri.go:89] found id: ""
	I1205 07:49:02.490503  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.490513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:02.490519  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:02.490586  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:02.516169  299667 cri.go:89] found id: ""
	I1205 07:49:02.516196  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.516205  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:02.516211  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:02.516271  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:02.541403  299667 cri.go:89] found id: ""
	I1205 07:49:02.541429  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.541439  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:02.541445  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:02.541507  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:02.566995  299667 cri.go:89] found id: ""
	I1205 07:49:02.567017  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.567025  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:02.567032  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:02.567099  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:02.597621  299667 cri.go:89] found id: ""
	I1205 07:49:02.597644  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.597652  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:02.597657  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:02.597716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:02.628924  299667 cri.go:89] found id: ""
	I1205 07:49:02.628951  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.628960  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:02.628969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:02.628980  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:02.693315  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:02.693348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:02.707066  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:02.707162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:02.771707  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:02.771729  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:02.771742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:02.797113  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:02.797145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
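Editorial note: the timestamps expose the outer wait loop. `sudo pgrep -xnf kube-apiserver.*minikube.*` (-f matches the full command line, -x requires an exact match of it, -n picks the newest process) fires at 07:48:59, 07:49:02, 07:49:05 and so on, with roughly a 2.5 s pause after each diagnostic pass. A stdlib-only sketch of that cadence; the timeout budget and helper names are assumptions, not values from the log:

    package main

    import (
    	"os/exec"
    	"time"
    )

    // apiserverRunning reproduces the logged probe; a zero exit status means
    // pgrep found a matching kube-apiserver process.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumed budget, not from the log
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			return // apiserver is up; stop gathering diagnostics
    		}
    		// each miss triggers the crictl listings and log gathering seen above
    		time.Sleep(2500 * time.Millisecond)
    	}
    }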
	I1205 07:49:05.326530  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:05.336990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:05.337057  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:05.360427  299667 cri.go:89] found id: ""
	I1205 07:49:05.360451  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.360460  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:05.360466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:05.360525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:05.384196  299667 cri.go:89] found id: ""
	I1205 07:49:05.384222  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.384230  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:05.384237  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:05.384299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:05.410321  299667 cri.go:89] found id: ""
	I1205 07:49:05.410344  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.410352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:05.410358  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:05.410417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:05.433726  299667 cri.go:89] found id: ""
	I1205 07:49:05.433793  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.433815  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:05.433833  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:05.433921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:05.458853  299667 cri.go:89] found id: ""
	I1205 07:49:05.458924  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.458940  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:05.458947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:05.459008  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:05.482445  299667 cri.go:89] found id: ""
	I1205 07:49:05.482514  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.482529  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:05.482538  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:05.482610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:05.507192  299667 cri.go:89] found id: ""
	I1205 07:49:05.507260  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.507282  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:05.507300  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:05.507393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:05.532405  299667 cri.go:89] found id: ""
	I1205 07:49:05.532439  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.532448  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:05.532459  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:05.532470  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:05.587713  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:05.587744  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:05.600994  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:05.601062  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:05.676675  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
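Editorial note: it is worth noticing which kubectl is failing here, namely the version-pinned binary minikube installs inside the node (`/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl`), pointed at the node-local kubeconfig. Host-side kubectl configuration is therefore not a factor in these errors. A trivial sketch of how such a command line is assembled (the helper name is hypothetical):

    package main

    import "fmt"

    // describeNodesCmd builds the exact command the log shows, with the
    // Kubernetes version selecting the matching kubectl binary on the node.
    func describeNodesCmd(version string) string {
    	return "sudo /var/lib/minikube/binaries/" + version +
    		"/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
    }

    func main() {
    	fmt.Println(describeNodesCmd("v1.35.0-beta.0"))
    }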
	I1205 07:49:05.676745  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:05.676770  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:05.700917  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:05.700948  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.230743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:08.241254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:08.241324  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:08.265687  299667 cri.go:89] found id: ""
	I1205 07:49:08.265765  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.265781  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:08.265789  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:08.265873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:08.291182  299667 cri.go:89] found id: ""
	I1205 07:49:08.291212  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.291222  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:08.291230  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:08.291288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:08.316404  299667 cri.go:89] found id: ""
	I1205 07:49:08.316431  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.316439  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:08.316446  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:08.316503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:08.342004  299667 cri.go:89] found id: ""
	I1205 07:49:08.342030  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.342038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:08.342044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:08.342103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:08.370679  299667 cri.go:89] found id: ""
	I1205 07:49:08.370700  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.370708  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:08.370715  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:08.370791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:08.398788  299667 cri.go:89] found id: ""
	I1205 07:49:08.398848  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.398880  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:08.398896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:08.398967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:08.427499  299667 cri.go:89] found id: ""
	I1205 07:49:08.427532  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.427552  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:08.427560  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:08.427627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:08.455982  299667 cri.go:89] found id: ""
	I1205 07:49:08.456008  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.456016  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:08.456025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:08.456037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:08.469660  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:08.469687  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:08.534660  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:08.534684  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:08.534697  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:08.560195  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:08.560228  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.590035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:08.590061  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
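Editorial note: the gather order just changed. kubelet logs came first in the earlier cycles but last in this one, after container status. That shuffling is consistent with the gatherers being held in a Go map, whose range order is intentionally randomized per iteration; a self-contained demonstration (the map contents are illustrative, not minikube's actual data structure):

    package main

    import "fmt"

    func main() {
    	gatherers := map[string]string{
    		"kubelet":          "journalctl -u kubelet -n 400",
    		"dmesg":            "dmesg ... | tail -n 400",
    		"describe nodes":   "kubectl describe nodes",
    		"containerd":       "journalctl -u containerd -n 400",
    		"container status": "crictl ps -a",
    	}
    	for i := 0; i < 3; i++ {
    		for name := range gatherers { // order differs from pass to pass
    			fmt.Print(name, " | ")
    		}
    		fmt.Println()
    	}
    }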
	I1205 07:49:11.150392  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:11.161108  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:11.161194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:11.185243  299667 cri.go:89] found id: ""
	I1205 07:49:11.185264  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.185273  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:11.185280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:11.185338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:11.208758  299667 cri.go:89] found id: ""
	I1205 07:49:11.208797  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.208806  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:11.208815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:11.208884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:11.235054  299667 cri.go:89] found id: ""
	I1205 07:49:11.235077  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.235086  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:11.235092  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:11.235157  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:11.259045  299667 cri.go:89] found id: ""
	I1205 07:49:11.259068  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.259076  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:11.259082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:11.259143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:11.288257  299667 cri.go:89] found id: ""
	I1205 07:49:11.288282  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.288291  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:11.288298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:11.288354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:11.312884  299667 cri.go:89] found id: ""
	I1205 07:49:11.312906  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.312914  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:11.312922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:11.312978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:11.341317  299667 cri.go:89] found id: ""
	I1205 07:49:11.341340  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.341348  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:11.341354  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:11.341411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:11.365207  299667 cri.go:89] found id: ""
	I1205 07:49:11.365234  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.365243  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:11.365260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:11.365271  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:11.423587  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:11.423619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:11.437723  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:11.437796  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:11.504822  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:11.504896  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:11.504935  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:11.529753  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:11.529791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:14.059148  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:14.069586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:14.069676  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:14.103804  299667 cri.go:89] found id: ""
	I1205 07:49:14.103828  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.103837  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:14.103843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:14.103901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:14.135010  299667 cri.go:89] found id: ""
	I1205 07:49:14.135031  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.135040  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:14.135045  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:14.135104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:14.170829  299667 cri.go:89] found id: ""
	I1205 07:49:14.170851  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.170859  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:14.170865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:14.170926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:14.199693  299667 cri.go:89] found id: ""
	I1205 07:49:14.199715  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.199724  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:14.199730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:14.199789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:14.223902  299667 cri.go:89] found id: ""
	I1205 07:49:14.223924  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.223931  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:14.223937  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:14.224003  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:14.247854  299667 cri.go:89] found id: ""
	I1205 07:49:14.247926  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.247950  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:14.247969  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:14.248063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:14.272146  299667 cri.go:89] found id: ""
	I1205 07:49:14.272219  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.272250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:14.272270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:14.272375  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:14.297307  299667 cri.go:89] found id: ""
	I1205 07:49:14.297377  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.297404  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:14.297421  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:14.297436  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:14.352148  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:14.352181  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:14.365391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:14.365420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:14.429045  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
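Editorial note: each failed attempt prints five memcache.go lines from a single kubectl PID (7471, then 7584, 7696, and so on), so the discovery client appears to retry the /api group-list request several times before kubectl gives up with the final "connection refused" summary, and a fresh kubectl process is spawned per cycle. The shape of that inner retry, as a generic sketch (the count of five is read off the log, not a documented constant):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // fetchAPIGroups stands in for the discovery call that keeps failing.
    func fetchAPIGroups() error {
    	return errors.New(`Get "https://localhost:8443/api?timeout=32s": connection refused`)
    }

    func main() {
    	for attempt := 1; attempt <= 5; attempt++ {
    		if err := fetchAPIGroups(); err != nil {
    			fmt.Println("E memcache:", err) // one log line per attempt
    			continue
    		}
    		return
    	}
    	fmt.Println("The connection to the server localhost:8443 was refused - did you specify the right host or port?")
    }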
	I1205 07:49:14.429068  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:14.429080  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:14.453460  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:14.453494  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:16.984086  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:16.994499  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:16.994567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:17.022900  299667 cri.go:89] found id: ""
	I1205 07:49:17.022923  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.022932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:17.022939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:17.022997  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:17.047244  299667 cri.go:89] found id: ""
	I1205 07:49:17.047318  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.047332  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:17.047339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:17.047415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:17.070683  299667 cri.go:89] found id: ""
	I1205 07:49:17.070716  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.070725  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:17.070732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:17.070811  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:17.104238  299667 cri.go:89] found id: ""
	I1205 07:49:17.104310  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.104332  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:17.104351  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:17.104433  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:17.130787  299667 cri.go:89] found id: ""
	I1205 07:49:17.130867  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.130890  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:17.130907  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:17.131014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:17.159177  299667 cri.go:89] found id: ""
	I1205 07:49:17.159212  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.159221  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:17.159228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:17.159293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:17.187127  299667 cri.go:89] found id: ""
	I1205 07:49:17.187148  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.187157  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:17.187168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:17.187225  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:17.214608  299667 cri.go:89] found id: ""
	I1205 07:49:17.214633  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.214641  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:17.214650  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:17.214690  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:17.227937  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:17.227964  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:17.290517  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:17.290581  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:17.290600  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:17.315039  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:17.315074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:17.343285  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:17.343348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:19.899406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:19.910597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:19.910679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:19.935640  299667 cri.go:89] found id: ""
	I1205 07:49:19.935664  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.935673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:19.935679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:19.935736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:19.959309  299667 cri.go:89] found id: ""
	I1205 07:49:19.959336  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.959345  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:19.959352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:19.959418  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:19.982862  299667 cri.go:89] found id: ""
	I1205 07:49:19.982884  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.982893  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:19.982899  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:19.982957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:20.016784  299667 cri.go:89] found id: ""
	I1205 07:49:20.016810  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.016819  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:20.016826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:20.016893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:20.044555  299667 cri.go:89] found id: ""
	I1205 07:49:20.044580  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.044590  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:20.044597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:20.044657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:20.080570  299667 cri.go:89] found id: ""
	I1205 07:49:20.080595  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.080603  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:20.080610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:20.080689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:20.112802  299667 cri.go:89] found id: ""
	I1205 07:49:20.112829  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.112838  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:20.112852  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:20.112912  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:20.145614  299667 cri.go:89] found id: ""
	I1205 07:49:20.145642  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.145650  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:20.145659  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:20.145670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:20.208200  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:20.208233  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:20.222391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:20.222422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:20.285471  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:20.285500  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:20.285513  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:20.311384  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:20.311415  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:22.840933  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:22.854843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:22.854939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:22.881572  299667 cri.go:89] found id: ""
	I1205 07:49:22.881598  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.881608  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:22.881614  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:22.881677  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:22.917647  299667 cri.go:89] found id: ""
	I1205 07:49:22.917677  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.917686  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:22.917692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:22.917750  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:22.943325  299667 cri.go:89] found id: ""
	I1205 07:49:22.943346  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.943355  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:22.943362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:22.943426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:22.967894  299667 cri.go:89] found id: ""
	I1205 07:49:22.967955  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.967979  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:22.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:22.968076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:22.994911  299667 cri.go:89] found id: ""
	I1205 07:49:22.994976  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.994991  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:22.994998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:22.995056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:23.022399  299667 cri.go:89] found id: ""
	I1205 07:49:23.022464  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.022486  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:23.022506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:23.022581  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:23.048262  299667 cri.go:89] found id: ""
	I1205 07:49:23.048283  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.048291  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:23.048297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:23.048355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:23.072655  299667 cri.go:89] found id: ""
	I1205 07:49:23.072684  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.072694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
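
Each probe cycle walks the same eight component names through `crictl ps -a --quiet --name=...`; an empty ID list for every one ('found id: ""', "0 containers") is what keeps the retry loop going. The whole sweep collapses to a short loop (illustrative only):

    # Empty output for a name means crictl found no container, running or
    # exited, whose name matches it.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
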
	I1205 07:49:23.072702  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:23.072720  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
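
Unit logs are pulled with journalctl scoped to the kubelet service and capped at the last 400 lines; since stdout is not a terminal here, journalctl emits plain text without invoking a pager. Equivalent, with the pager disabled explicitly:

    sudo journalctl -u kubelet -n 400 --no-pager
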
	I1205 07:49:23.132711  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:23.132742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
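
The dmesg invocation, spelled out with long options for readability (same filter: no pager, human-readable timestamps, no color, warning-or-worse priorities, last 400 lines):

    sudo dmesg --nopager --human --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400
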
	I1205 07:49:23.146553  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:23.146576  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:23.218207  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
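
Note that "describe nodes" runs the version-matched kubectl that minikube stages on the node, pointed at the node-local kubeconfig, so its failure shares the same root cause as everything else in this stretch: no apiserver on 8443, not a client misconfiguration. Reproducing the probe by hand (PROFILE again standing in for this run's profile; the binary and kubeconfig paths come straight from the log above):

    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
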
	I1205 07:49:23.218230  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:23.218243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:23.242426  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:23.242462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:25.772926  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:25.783467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:25.783546  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:25.811044  299667 cri.go:89] found id: ""
	I1205 07:49:25.811066  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.811075  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:25.811081  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:25.811139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:25.835534  299667 cri.go:89] found id: ""
	I1205 07:49:25.835558  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.835568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:25.835575  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:25.835637  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:25.866938  299667 cri.go:89] found id: ""
	I1205 07:49:25.866966  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.866974  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:25.866981  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:25.867043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:25.897273  299667 cri.go:89] found id: ""
	I1205 07:49:25.897302  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.897313  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:25.897320  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:25.897380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:25.923461  299667 cri.go:89] found id: ""
	I1205 07:49:25.923489  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.923497  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:25.923504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:25.923590  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:25.946791  299667 cri.go:89] found id: ""
	I1205 07:49:25.946813  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.946822  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:25.946828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:25.946885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:25.971479  299667 cri.go:89] found id: ""
	I1205 07:49:25.971507  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.971515  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:25.971521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:25.971580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:25.994965  299667 cri.go:89] found id: ""
	I1205 07:49:25.994986  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.994994  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:25.995003  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:25.995014  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:26.058667  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:26.058701  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:26.073089  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:26.073119  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:26.150334  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:26.150355  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:26.150367  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:26.182077  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:26.182109  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:28.710700  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:28.722142  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:28.722208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:28.749003  299667 cri.go:89] found id: ""
	I1205 07:49:28.749029  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.749037  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:28.749044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:28.749101  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:28.774112  299667 cri.go:89] found id: ""
	I1205 07:49:28.774141  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.774152  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:28.774158  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:28.774215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:28.797966  299667 cri.go:89] found id: ""
	I1205 07:49:28.797987  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.797996  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:28.798002  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:28.798058  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:28.825668  299667 cri.go:89] found id: ""
	I1205 07:49:28.825694  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.825703  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:28.825709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:28.825788  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:28.856952  299667 cri.go:89] found id: ""
	I1205 07:49:28.856986  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.857001  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:28.857008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:28.857091  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:28.882695  299667 cri.go:89] found id: ""
	I1205 07:49:28.882730  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.882746  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:28.882753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:28.882822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:28.909550  299667 cri.go:89] found id: ""
	I1205 07:49:28.909584  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.909594  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:28.909601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:28.909671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:28.942251  299667 cri.go:89] found id: ""
	I1205 07:49:28.942319  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.942340  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:28.942362  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:28.942387  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:29.005506  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:29.005539  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:29.005554  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:29.030880  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:29.030910  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:29.058353  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:29.058381  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:29.121228  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:29.121304  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.636506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:31.647234  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:31.647305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:31.672508  299667 cri.go:89] found id: ""
	I1205 07:49:31.672530  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.672539  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:31.672545  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:31.672603  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:31.696860  299667 cri.go:89] found id: ""
	I1205 07:49:31.696885  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.696894  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:31.696900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:31.696970  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:31.722649  299667 cri.go:89] found id: ""
	I1205 07:49:31.722676  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.722685  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:31.722692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:31.722770  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:31.748068  299667 cri.go:89] found id: ""
	I1205 07:49:31.748093  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.748101  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:31.748109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:31.748169  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:31.773290  299667 cri.go:89] found id: ""
	I1205 07:49:31.773315  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.773324  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:31.773330  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:31.773393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:31.804425  299667 cri.go:89] found id: ""
	I1205 07:49:31.804445  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.804454  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:31.804461  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:31.804521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:31.829116  299667 cri.go:89] found id: ""
	I1205 07:49:31.829137  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.829146  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:31.829152  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:31.829241  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:31.867330  299667 cri.go:89] found id: ""
	I1205 07:49:31.867406  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.867418  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:31.867427  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:31.867438  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:31.931647  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:31.931680  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.945211  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:31.945236  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:32.004694  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:32.004719  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:32.004738  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:32.031538  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:32.031572  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:34.562576  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:34.573366  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:34.573477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:34.599238  299667 cri.go:89] found id: ""
	I1205 07:49:34.599262  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.599272  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:34.599279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:34.599342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:34.624561  299667 cri.go:89] found id: ""
	I1205 07:49:34.624589  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.624598  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:34.624604  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:34.624666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:34.649603  299667 cri.go:89] found id: ""
	I1205 07:49:34.649624  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.649637  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:34.649644  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:34.649707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:34.674019  299667 cri.go:89] found id: ""
	I1205 07:49:34.674043  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.674052  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:34.674058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:34.674121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:34.700890  299667 cri.go:89] found id: ""
	I1205 07:49:34.700912  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.700921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:34.700928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:34.700988  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:34.727454  299667 cri.go:89] found id: ""
	I1205 07:49:34.727482  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.727491  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:34.727498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:34.727558  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:34.753086  299667 cri.go:89] found id: ""
	I1205 07:49:34.753107  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.753115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:34.753120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:34.753208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:34.779077  299667 cri.go:89] found id: ""
	I1205 07:49:34.779100  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.779109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:34.779118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:34.779129  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:34.839330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:34.839368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:34.857129  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:34.857175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:34.932420  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:34.932440  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:34.932452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:34.957616  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:34.957649  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:37.486529  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:37.496909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:37.496977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:37.521254  299667 cri.go:89] found id: ""
	I1205 07:49:37.521315  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.521349  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:37.521372  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:37.521462  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:37.544759  299667 cri.go:89] found id: ""
	I1205 07:49:37.544782  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.544791  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:37.544798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:37.544854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:37.569519  299667 cri.go:89] found id: ""
	I1205 07:49:37.569549  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.569558  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:37.569564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:37.569624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:37.593917  299667 cri.go:89] found id: ""
	I1205 07:49:37.593938  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.593947  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:37.593954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:37.594014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:37.619915  299667 cri.go:89] found id: ""
	I1205 07:49:37.619940  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.619949  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:37.619955  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:37.620016  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:37.647160  299667 cri.go:89] found id: ""
	I1205 07:49:37.647186  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.647195  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:37.647202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:37.647261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:37.672076  299667 cri.go:89] found id: ""
	I1205 07:49:37.672097  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.672105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:37.672111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:37.672170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:37.697550  299667 cri.go:89] found id: ""
	I1205 07:49:37.697573  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.697581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:37.697590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:37.697601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:37.754073  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:37.754105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:37.769043  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:37.769071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:37.831338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:37.831359  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:37.831371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:37.857528  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:37.857564  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.404513  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:40.415071  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:40.415143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:40.439261  299667 cri.go:89] found id: ""
	I1205 07:49:40.439283  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.439291  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:40.439298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:40.439355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:40.464063  299667 cri.go:89] found id: ""
	I1205 07:49:40.464084  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.464092  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:40.464098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:40.464158  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:40.490322  299667 cri.go:89] found id: ""
	I1205 07:49:40.490344  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.490352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:40.490359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:40.490419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:40.517055  299667 cri.go:89] found id: ""
	I1205 07:49:40.517078  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.517087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:40.517093  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:40.517151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:40.545250  299667 cri.go:89] found id: ""
	I1205 07:49:40.545273  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.545282  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:40.545288  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:40.545348  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:40.569118  299667 cri.go:89] found id: ""
	I1205 07:49:40.569142  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.569151  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:40.569188  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:40.569248  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:40.593152  299667 cri.go:89] found id: ""
	I1205 07:49:40.593209  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.593217  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:40.593223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:40.593287  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:40.617285  299667 cri.go:89] found id: ""
	I1205 07:49:40.617308  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.617316  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:40.617325  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:40.617336  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:40.681518  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:40.681540  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:40.681553  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:40.707309  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:40.707347  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.740118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:40.740145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:40.798971  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:40.799001  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.313313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:43.324257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:43.324337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:43.356730  299667 cri.go:89] found id: ""
	I1205 07:49:43.356755  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.356763  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:43.356770  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:43.356828  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:43.386071  299667 cri.go:89] found id: ""
	I1205 07:49:43.386097  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.386106  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:43.386112  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:43.386172  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:43.415579  299667 cri.go:89] found id: ""
	I1205 07:49:43.415606  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.415615  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:43.415621  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:43.415679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:43.441039  299667 cri.go:89] found id: ""
	I1205 07:49:43.441064  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.441075  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:43.441082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:43.441141  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:43.466399  299667 cri.go:89] found id: ""
	I1205 07:49:43.466432  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.466442  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:43.466449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:43.466519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:43.497264  299667 cri.go:89] found id: ""
	I1205 07:49:43.497309  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.497319  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:43.497326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:43.497397  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:43.522221  299667 cri.go:89] found id: ""
	I1205 07:49:43.522247  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.522256  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:43.522262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:43.522325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:43.546887  299667 cri.go:89] found id: ""
	I1205 07:49:43.546953  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.546969  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:43.546980  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:43.546992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:43.613596  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:43.613644  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.628794  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:43.628825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:43.698835  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
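Every kubectl attempt above dies with `connect: connection refused` on `[::1]:8443`, meaning nothing is listening on the apiserver port at all. A quick way to confirm that symptom independently of kubectl is a bare TCP dial; this is a diagnostic sketch, not part of the test harness:

```go
// Probe the apiserver port directly; with no apiserver running this
// prints the same "connect: connection refused" seen in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```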
	I1205 07:49:43.698854  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:43.698866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:43.725776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:43.725811  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:46.256365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:46.267583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:46.267659  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:46.296652  299667 cri.go:89] found id: ""
	I1205 07:49:46.296679  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.296687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:46.296694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:46.296760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:46.323489  299667 cri.go:89] found id: ""
	I1205 07:49:46.323514  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.323522  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:46.323529  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:46.323593  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:46.355225  299667 cri.go:89] found id: ""
	I1205 07:49:46.355249  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.355258  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:46.355265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:46.355340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:46.383644  299667 cri.go:89] found id: ""
	I1205 07:49:46.383678  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.383687  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:46.383694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:46.383768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:46.421484  299667 cri.go:89] found id: ""
	I1205 07:49:46.421518  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.421527  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:46.421533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:46.421602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:46.447032  299667 cri.go:89] found id: ""
	I1205 07:49:46.447057  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.447066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:46.447073  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:46.447136  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:46.472839  299667 cri.go:89] found id: ""
	I1205 07:49:46.472860  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.472867  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:46.472873  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:46.472930  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:46.501395  299667 cri.go:89] found id: ""
	I1205 07:49:46.501422  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.501432  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:46.501441  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:46.501452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:46.558146  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:46.558178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:46.573118  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:46.573146  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:46.637720  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:46.637741  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:46.637754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:46.662623  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:46.662658  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
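The cycle timestamps above (07:49:43, 07:49:46, 07:49:49, ...) advance by roughly three seconds, consistent with a fixed-interval wait for a healthy apiserver. A hypothetical loop with that shape, where the interval and deadline are assumptions rather than minikube's actual values:

```go
// Fixed-interval poll until the address accepts a TCP connection or
// the deadline expires. Interval and timeout are illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, addr)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```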
	I1205 07:49:49.193341  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:49.204485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:49.204616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:49.235316  299667 cri.go:89] found id: ""
	I1205 07:49:49.235380  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.235403  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:49.235424  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:49.235503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:49.259781  299667 cri.go:89] found id: ""
	I1205 07:49:49.259811  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.259820  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:49.259826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:49.259894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:49.283985  299667 cri.go:89] found id: ""
	I1205 07:49:49.284025  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.284034  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:49.284041  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:49.284123  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:49.312614  299667 cri.go:89] found id: ""
	I1205 07:49:49.312643  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.312652  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:49.312659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:49.312728  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:49.338339  299667 cri.go:89] found id: ""
	I1205 07:49:49.338362  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.338371  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:49.338378  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:49.338444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:49.367532  299667 cri.go:89] found id: ""
	I1205 07:49:49.367557  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.367565  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:49.367572  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:49.367635  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:49.401925  299667 cri.go:89] found id: ""
	I1205 07:49:49.402000  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.402020  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:49.402038  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:49.402122  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:49.428942  299667 cri.go:89] found id: ""
	I1205 07:49:49.428975  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.428993  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:49.429003  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:49.429021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:49.492403  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:49.492426  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:49.492439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:49.517991  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:49.518021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.545729  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:49.545754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:49.601110  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:49.601140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
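The "Gathering logs for kubelet/containerd" steps shell out to `journalctl -u <unit> -n 400`. A local equivalent of that gather step (the wrapper is illustrative; minikube runs the command inside the guest over SSH):

```go
// Fetch the last n journal lines for a systemd unit, mirroring the
// `sudo journalctl -u <unit> -n 400` commands in the log.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u %s -n %d", unit, lines))
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s ==\n%s", u, logs)
	}
}
```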
	I1205 07:49:52.115102  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:52.128449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:52.128522  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:52.158550  299667 cri.go:89] found id: ""
	I1205 07:49:52.158575  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.158584  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:52.158591  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:52.158654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:52.183729  299667 cri.go:89] found id: ""
	I1205 07:49:52.183750  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.183759  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:52.183765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:52.183829  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:52.209241  299667 cri.go:89] found id: ""
	I1205 07:49:52.209269  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.209279  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:52.209286  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:52.209367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:52.234457  299667 cri.go:89] found id: ""
	I1205 07:49:52.234488  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.234497  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:52.234504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:52.234568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:52.258774  299667 cri.go:89] found id: ""
	I1205 07:49:52.258799  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.258808  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:52.258815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:52.258904  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:52.284285  299667 cri.go:89] found id: ""
	I1205 07:49:52.284319  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.284329  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:52.284336  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:52.284406  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:52.311443  299667 cri.go:89] found id: ""
	I1205 07:49:52.311470  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.311479  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:52.311485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:52.311577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:52.335827  299667 cri.go:89] found id: ""
	I1205 07:49:52.335859  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.335868  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:52.335879  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:52.335890  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:52.395851  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:52.395889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:52.410419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:52.410446  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:52.478966  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:52.478997  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:52.479010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:52.504082  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:52.504114  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
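The "container status" step uses a shell fallback: try crictl, and only if that fails fall back to `docker ps -a` (the `` `which crictl || echo crictl` `` part merely avoids a hard dependency on crictl being on PATH). The same fallback expressed in Go, as a sketch rather than minikube's code:

```go
// Try crictl first, fall back to docker, mirroring the shell one-liner
// `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(out)
}
```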
	I1205 07:49:55.031406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:55.042458  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:55.042534  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:55.066642  299667 cri.go:89] found id: ""
	I1205 07:49:55.066667  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.066677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:55.066684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:55.066746  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:55.091150  299667 cri.go:89] found id: ""
	I1205 07:49:55.091180  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.091189  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:55.091195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:55.091255  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:55.121930  299667 cri.go:89] found id: ""
	I1205 07:49:55.121951  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.121960  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:55.121965  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:55.122023  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:55.149981  299667 cri.go:89] found id: ""
	I1205 07:49:55.150058  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.150079  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:55.150097  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:55.150184  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:55.173681  299667 cri.go:89] found id: ""
	I1205 07:49:55.173704  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.173712  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:55.173718  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:55.173777  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:55.197308  299667 cri.go:89] found id: ""
	I1205 07:49:55.197332  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.197341  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:55.197347  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:55.197403  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:55.223472  299667 cri.go:89] found id: ""
	I1205 07:49:55.223493  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.223502  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:55.223508  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:55.223572  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:55.252432  299667 cri.go:89] found id: ""
	I1205 07:49:55.252457  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.252466  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:55.252474  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:55.252487  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:55.318488  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:55.318520  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:55.318533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:55.343511  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:55.343587  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.386735  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:55.386818  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:55.452457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:55.452497  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
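Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a process-level check that runs before any container probe. pgrep exits non-zero when nothing matches, so the exit status doubles as a liveness test (sketch):

```go
// Process-level apiserver check, as at the top of each cycle: pgrep
// exits non-zero when no process matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found")
		return
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}
```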
	I1205 07:49:57.966172  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:57.976919  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:57.976991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:58.003394  299667 cri.go:89] found id: ""
	I1205 07:49:58.003420  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.003429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:58.003436  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:58.003505  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:58.040382  299667 cri.go:89] found id: ""
	I1205 07:49:58.040403  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.040411  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:58.040425  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:58.040486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:58.066131  299667 cri.go:89] found id: ""
	I1205 07:49:58.066161  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.066170  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:58.066177  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:58.066236  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:58.092126  299667 cri.go:89] found id: ""
	I1205 07:49:58.092149  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.092157  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:58.092164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:58.092224  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:58.123111  299667 cri.go:89] found id: ""
	I1205 07:49:58.123138  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.123147  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:58.123154  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:58.123215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:58.155898  299667 cri.go:89] found id: ""
	I1205 07:49:58.155920  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.155929  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:58.155936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:58.156002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:58.181658  299667 cri.go:89] found id: ""
	I1205 07:49:58.181684  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.181694  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:58.181700  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:58.181760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:58.211071  299667 cri.go:89] found id: ""
	I1205 07:49:58.211093  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.211102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:58.211111  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:58.211122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:58.271505  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:58.271551  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:58.287071  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:58.287097  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:58.357627  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:58.357680  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:58.357694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:58.388703  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:58.388747  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
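Note that the describe-nodes step does not use the host's kubectl: it invokes the binary staged for the Kubernetes version under test, against the in-guest kubeconfig. The invocation reconstructed from the log (the paths are copied verbatim; the Go wrapper is illustrative):

```go
// Version-pinned kubectl invocation as seen in the log. With no
// apiserver listening, CombinedOutput returns the "connection refused"
// stderr together with a non-zero exit error.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
```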
	I1205 07:50:00.928058  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:00.939115  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:00.939186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:00.967955  299667 cri.go:89] found id: ""
	I1205 07:50:00.967979  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.967989  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:00.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:00.968054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:00.994981  299667 cri.go:89] found id: ""
	I1205 07:50:00.995006  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.995014  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:00.995022  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:00.995081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:01.020388  299667 cri.go:89] found id: ""
	I1205 07:50:01.020412  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.020421  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:01.020427  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:01.020487  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:01.045771  299667 cri.go:89] found id: ""
	I1205 07:50:01.045796  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.045816  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:01.045839  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:01.045915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:01.072970  299667 cri.go:89] found id: ""
	I1205 07:50:01.072995  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.073004  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:01.073009  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:01.073069  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:01.110343  299667 cri.go:89] found id: ""
	I1205 07:50:01.110365  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.110374  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:01.110382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:01.110442  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:01.143588  299667 cri.go:89] found id: ""
	I1205 07:50:01.143627  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.143669  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:01.143676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:01.143734  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:01.173718  299667 cri.go:89] found id: ""
	I1205 07:50:01.173744  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.173753  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:01.173762  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:01.173775  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:01.240437  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:01.240461  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:01.240475  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:01.265849  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:01.265884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:01.295649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:01.295676  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:01.352457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:01.352493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:03.872935  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:03.884137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:03.884213  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:03.909107  299667 cri.go:89] found id: ""
	I1205 07:50:03.909129  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.909138  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:03.909144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:03.909231  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:03.935188  299667 cri.go:89] found id: ""
	I1205 07:50:03.935217  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.935229  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:03.935235  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:03.935293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:03.960991  299667 cri.go:89] found id: ""
	I1205 07:50:03.961013  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.961023  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:03.961029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:03.961087  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:03.993563  299667 cri.go:89] found id: ""
	I1205 07:50:03.993586  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.993595  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:03.993602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:03.993658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:04.022615  299667 cri.go:89] found id: ""
	I1205 07:50:04.022640  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.022650  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:04.022656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:04.022744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:04.052044  299667 cri.go:89] found id: ""
	I1205 07:50:04.052067  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.052076  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:04.052083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:04.052155  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:04.077688  299667 cri.go:89] found id: ""
	I1205 07:50:04.077766  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.077790  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:04.077798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:04.077873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:04.108745  299667 cri.go:89] found id: ""
	I1205 07:50:04.108772  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.108781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:04.108790  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:04.108806  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:04.124370  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:04.124398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:04.202708  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:04.202730  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:04.202742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:04.228486  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:04.228522  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:04.257187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:04.257214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:06.817489  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:06.828313  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:06.828385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:06.852373  299667 cri.go:89] found id: ""
	I1205 07:50:06.852445  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.852468  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:06.852489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:06.852557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:06.877263  299667 cri.go:89] found id: ""
	I1205 07:50:06.877291  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.877300  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:06.877306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:06.877373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:06.902856  299667 cri.go:89] found id: ""
	I1205 07:50:06.902882  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.902892  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:06.902898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:06.902962  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:06.928569  299667 cri.go:89] found id: ""
	I1205 07:50:06.928595  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.928604  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:06.928611  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:06.928689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:06.953448  299667 cri.go:89] found id: ""
	I1205 07:50:06.953481  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.953491  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:06.953498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:06.953567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:06.978486  299667 cri.go:89] found id: ""
	I1205 07:50:06.978557  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.978579  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:06.978592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:06.978653  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:07.004116  299667 cri.go:89] found id: ""
	I1205 07:50:07.004201  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.004245  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:07.004278  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:07.004369  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:07.030912  299667 cri.go:89] found id: ""
	I1205 07:50:07.030946  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.030956  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
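Each cycle above sweeps the full list of expected control-plane container names through crictl and finds nothing, running or exited. A minimal sketch of the same sweep, runnable by hand on the node (container names taken from the log; assumes crictl is installed and sudo is available):

    # poll every expected control-plane container; --all includes exited
    # containers, --quiet prints bare IDs (empty output = no container)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done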
	I1205 07:50:07.030966  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:07.030995  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:07.087669  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:07.087703  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:07.102364  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:07.102424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:07.175733  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
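The memcache.go errors are kubectl's API discovery calls failing before describe ever runs: with no kube-apiserver container present (see the empty crictl sweeps above), nothing listens on port 8443, so the dial to localhost ([::1]:8443) is refused. Two hypothetical hand checks on the node, not part of the test run:

    # is anything listening on the apiserver port?
    sudo ss -tlnp | grep -w 8443 || echo "no listener on 8443"
    # does the apiserver answer its health endpoint? (-k: self-signed certs)
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"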
	I1205 07:50:07.175756  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:07.175768  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:07.201087  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:07.201120  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
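The container-status command above is deliberately defensive: the backquoted which crictl || echo crictl expands to crictl's absolute path when it is installed, or to the bare word crictl otherwise, and if that invocation fails for any reason the trailing || falls through to docker ps -a. Spelled out as an illustrative sketch (the function name is hypothetical):

    container_status() {
      # prefer the CRI client; fall back to the docker CLI if crictl is
      # missing from PATH or exits non-zero
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }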
	I1205 07:50:09.733660  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:09.744254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:09.744322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:09.768703  299667 cri.go:89] found id: ""
	I1205 07:50:09.768725  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.768733  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:09.768740  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:09.768803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:09.792862  299667 cri.go:89] found id: ""
	I1205 07:50:09.792884  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.792892  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:09.792898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:09.792953  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:09.816998  299667 cri.go:89] found id: ""
	I1205 07:50:09.817020  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.817028  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:09.817042  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:09.817098  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:09.846103  299667 cri.go:89] found id: ""
	I1205 07:50:09.846128  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.846137  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:09.846144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:09.846215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:09.869920  299667 cri.go:89] found id: ""
	I1205 07:50:09.869943  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.869952  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:09.869958  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:09.870017  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:09.894186  299667 cri.go:89] found id: ""
	I1205 07:50:09.894207  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.894216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:09.894222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:09.894279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:09.918290  299667 cri.go:89] found id: ""
	I1205 07:50:09.918323  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.918332  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:09.918338  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:09.918404  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:09.942213  299667 cri.go:89] found id: ""
	I1205 07:50:09.942241  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.942250  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:09.942260  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:09.942300  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.971801  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:09.971827  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:10.027693  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:10.027732  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:10.042067  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:10.042095  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:10.106137  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:10.106162  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:10.106175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.633673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:12.645469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:12.645547  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:12.676971  299667 cri.go:89] found id: ""
	I1205 07:50:12.676997  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.677007  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:12.677014  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:12.677084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:12.702338  299667 cri.go:89] found id: ""
	I1205 07:50:12.702361  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.702370  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:12.702377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:12.702436  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:12.726932  299667 cri.go:89] found id: ""
	I1205 07:50:12.726958  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.726968  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:12.726974  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:12.727054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:12.752194  299667 cri.go:89] found id: ""
	I1205 07:50:12.752231  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.752240  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:12.752246  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:12.752354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:12.777805  299667 cri.go:89] found id: ""
	I1205 07:50:12.777874  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.777897  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:12.777917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:12.777990  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:12.802215  299667 cri.go:89] found id: ""
	I1205 07:50:12.802240  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.802250  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:12.802257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:12.802334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:12.831796  299667 cri.go:89] found id: ""
	I1205 07:50:12.831821  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.831830  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:12.831836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:12.831899  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:12.856886  299667 cri.go:89] found id: ""
	I1205 07:50:12.856912  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.856921  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:12.856930  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:12.856941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:12.870323  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:12.870352  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:12.933303  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:12.933325  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:12.933339  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.958156  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:12.958191  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:12.986132  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:12.986158  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
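The sequence repeats on a roughly three-second cadence: pgrep for a kube-apiserver process, the crictl sweep, another round of log gathering. A minimal sketch of the wait loop this implies; the interval and the 10-minute timeout below are assumptions, not values from the log:

    # keep polling until the apiserver process appears or we give up
    deadline=$((SECONDS + 600))   # assumed timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3                     # assumed interval, matches the timestamps
    done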
	I1205 07:50:15.543265  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:15.553756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:15.553824  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:15.579618  299667 cri.go:89] found id: ""
	I1205 07:50:15.579641  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.579650  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:15.579656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:15.579719  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:15.615622  299667 cri.go:89] found id: ""
	I1205 07:50:15.615646  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.615654  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:15.615660  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:15.615718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:15.648566  299667 cri.go:89] found id: ""
	I1205 07:50:15.648595  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.648604  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:15.648610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:15.648669  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:15.678106  299667 cri.go:89] found id: ""
	I1205 07:50:15.678132  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.678141  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:15.678147  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:15.678210  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:15.703125  299667 cri.go:89] found id: ""
	I1205 07:50:15.703148  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.703157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:15.703163  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:15.703229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:15.727847  299667 cri.go:89] found id: ""
	I1205 07:50:15.727873  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.727882  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:15.727889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:15.727948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:15.755105  299667 cri.go:89] found id: ""
	I1205 07:50:15.755129  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.755138  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:15.755144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:15.755203  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:15.780309  299667 cri.go:89] found id: ""
	I1205 07:50:15.780334  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.780343  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:15.780351  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:15.780362  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:15.836755  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:15.836788  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:15.850164  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:15.850241  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:15.913792  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:15.913812  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:15.913828  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
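The containerd and kubelet captures both use journalctl's unit filter: -u selects the systemd unit and -n 400 keeps only the last 400 journal entries. The same captures by hand (--no-pager is added here for non-interactive use; the test runs them through bash -c instead):

    sudo journalctl -u containerd -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager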
	I1205 07:50:15.938310  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:15.938344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.465299  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:18.475870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:18.475939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:18.501780  299667 cri.go:89] found id: ""
	I1205 07:50:18.501806  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.501821  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:18.501828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:18.501886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:18.526890  299667 cri.go:89] found id: ""
	I1205 07:50:18.526920  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.526929  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:18.526936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:18.526996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:18.552506  299667 cri.go:89] found id: ""
	I1205 07:50:18.552531  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.552540  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:18.552546  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:18.552605  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:18.577492  299667 cri.go:89] found id: ""
	I1205 07:50:18.577517  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.577526  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:18.577533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:18.577591  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:18.609705  299667 cri.go:89] found id: ""
	I1205 07:50:18.609731  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.609740  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:18.609746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:18.609804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:18.637216  299667 cri.go:89] found id: ""
	I1205 07:50:18.637242  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.637251  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:18.637258  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:18.637315  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:18.663025  299667 cri.go:89] found id: ""
	I1205 07:50:18.663051  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.663060  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:18.663067  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:18.663145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:18.689022  299667 cri.go:89] found id: ""
	I1205 07:50:18.689086  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.689109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:18.689131  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:18.689192  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:18.703250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:18.703279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:18.768192  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:18.768211  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:18.768223  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:18.793554  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:18.793585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.828893  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:18.828920  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:21.385309  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:21.397376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:21.397451  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:21.424618  299667 cri.go:89] found id: ""
	I1205 07:50:21.424642  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.424652  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:21.424659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:21.424717  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:21.451181  299667 cri.go:89] found id: ""
	I1205 07:50:21.451202  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.451211  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:21.451217  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:21.451275  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:21.475206  299667 cri.go:89] found id: ""
	I1205 07:50:21.475228  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.475237  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:21.475243  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:21.475300  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:21.505637  299667 cri.go:89] found id: ""
	I1205 07:50:21.505663  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.505672  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:21.505679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:21.505738  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:21.534466  299667 cri.go:89] found id: ""
	I1205 07:50:21.534541  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.534557  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:21.534579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:21.534644  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:21.560428  299667 cri.go:89] found id: ""
	I1205 07:50:21.560453  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.560462  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:21.560472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:21.560530  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:21.584825  299667 cri.go:89] found id: ""
	I1205 07:50:21.584852  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.584860  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:21.584867  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:21.584934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:21.623066  299667 cri.go:89] found id: ""
	I1205 07:50:21.623093  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.623102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:21.623112  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:21.623127  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:21.687398  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:21.687435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:21.702122  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:21.702149  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:21.767031  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:21.767050  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:21.767063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:21.791862  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:21.791895  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.321349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:24.331708  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:24.331778  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:24.369231  299667 cri.go:89] found id: ""
	I1205 07:50:24.369255  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.369264  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:24.369270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:24.369345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:24.397058  299667 cri.go:89] found id: ""
	I1205 07:50:24.397078  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.397088  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:24.397094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:24.397152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:24.425233  299667 cri.go:89] found id: ""
	I1205 07:50:24.425256  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.425264  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:24.425271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:24.425325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:24.451011  299667 cri.go:89] found id: ""
	I1205 07:50:24.451032  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.451041  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:24.451047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:24.451103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:24.475249  299667 cri.go:89] found id: ""
	I1205 07:50:24.475278  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.475287  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:24.475294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:24.475352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:24.500860  299667 cri.go:89] found id: ""
	I1205 07:50:24.500885  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.500895  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:24.500911  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:24.500969  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:24.525728  299667 cri.go:89] found id: ""
	I1205 07:50:24.525751  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.525771  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:24.525778  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:24.525839  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:24.549854  299667 cri.go:89] found id: ""
	I1205 07:50:24.549877  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.549885  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:24.549894  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:24.549923  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:24.574340  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:24.574371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.609821  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:24.609850  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:24.668879  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:24.668917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
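In the dmesg capture above, -P (--nopager) writes straight to stdout, -H (--human) gives readable timestamps, -L=never suppresses color codes, and --level limits output to warning severity and worse; tail then bounds the capture at the most recent 400 lines. Verbatim from the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400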
	I1205 07:50:24.683025  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:24.683052  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:24.745503  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:27.247317  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:27.258551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:27.258627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:27.282556  299667 cri.go:89] found id: ""
	I1205 07:50:27.282584  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.282594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:27.282601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:27.282685  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:27.311566  299667 cri.go:89] found id: ""
	I1205 07:50:27.311593  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.311602  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:27.311608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:27.311666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:27.336201  299667 cri.go:89] found id: ""
	I1205 07:50:27.336226  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.336235  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:27.336241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:27.336295  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:27.374655  299667 cri.go:89] found id: ""
	I1205 07:50:27.374733  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.374756  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:27.374804  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:27.374881  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:27.403358  299667 cri.go:89] found id: ""
	I1205 07:50:27.403381  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.403390  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:27.403396  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:27.403453  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:27.434322  299667 cri.go:89] found id: ""
	I1205 07:50:27.434347  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.434355  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:27.434362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:27.434430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:27.458621  299667 cri.go:89] found id: ""
	I1205 07:50:27.458643  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.458651  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:27.458669  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:27.458726  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:27.487490  299667 cri.go:89] found id: ""
	I1205 07:50:27.487514  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.487524  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:27.487532  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:27.487543  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:27.515434  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:27.515462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:27.574832  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:27.574864  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:27.588186  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:27.588210  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:27.666339  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:27.666400  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:27.666420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
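
Each cycle above shells out to crictl once per control-plane component and finds no container IDs. The following Go sketch reproduces that check by hand — it is an illustration, not minikube's code; it assumes crictl is on PATH and passwordless sudo, and the component list is copied from the log lines above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The components the log polls for, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as the ssh_runner lines: list containers in any
		// state whose name matches, printing only their IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s)\n", name, len(ids)) // 0 matches "found id: \"\"" above
	}
}

With no control-plane containers running, every component reports 0, matching the repeated `0 containers: []` lines.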
	I1205 07:50:30.192057  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:30.203579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:30.203657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:30.233613  299667 cri.go:89] found id: ""
	I1205 07:50:30.233663  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.233673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:30.233680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:30.233739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:30.262491  299667 cri.go:89] found id: ""
	I1205 07:50:30.262517  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.262526  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:30.262532  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:30.262599  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:30.292006  299667 cri.go:89] found id: ""
	I1205 07:50:30.292031  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.292042  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:30.292078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:30.292134  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:30.317938  299667 cri.go:89] found id: ""
	I1205 07:50:30.317963  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.317972  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:30.317979  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:30.318037  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:30.359844  299667 cri.go:89] found id: ""
	I1205 07:50:30.359871  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.359880  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:30.359887  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:30.359946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:30.391160  299667 cri.go:89] found id: ""
	I1205 07:50:30.391187  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.391196  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:30.391202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:30.391256  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:30.424091  299667 cri.go:89] found id: ""
	I1205 07:50:30.424116  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.424124  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:30.424131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:30.424186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:30.449137  299667 cri.go:89] found id: ""
	I1205 07:50:30.449184  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.449193  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:30.449204  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:30.449216  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:30.477964  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:30.477990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:30.535174  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:30.535208  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:30.548511  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:30.548537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:30.611856  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:30.611880  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:30.611892  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
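
Every describe-nodes attempt fails with `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port at all. A quick standalone probe of that symptom (the port is taken from the log; the program itself is a sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver address kubectl is dialing above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// A "connect: connection refused" here matches the kubectl stderr.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}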
	I1205 07:50:33.137527  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:33.148376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:33.148457  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:33.173779  299667 cri.go:89] found id: ""
	I1205 07:50:33.173802  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.173810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:33.173816  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:33.173893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:33.198637  299667 cri.go:89] found id: ""
	I1205 07:50:33.198661  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.198671  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:33.198678  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:33.198739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:33.227950  299667 cri.go:89] found id: ""
	I1205 07:50:33.227972  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.227980  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:33.227986  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:33.228056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:33.252400  299667 cri.go:89] found id: ""
	I1205 07:50:33.252434  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.252446  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:33.252454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:33.252528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:33.277287  299667 cri.go:89] found id: ""
	I1205 07:50:33.277311  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.277320  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:33.277326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:33.277384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:33.303260  299667 cri.go:89] found id: ""
	I1205 07:50:33.303285  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.303294  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:33.303310  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:33.303387  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:33.327837  299667 cri.go:89] found id: ""
	I1205 07:50:33.327860  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.327868  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:33.327875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:33.327934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:33.361138  299667 cri.go:89] found id: ""
	I1205 07:50:33.361196  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.361206  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:33.361216  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:33.361227  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:33.439490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:33.439534  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:33.454134  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:33.454201  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:33.519248  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:33.519324  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:33.519346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.544362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:33.544404  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
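
The timestamps show the same health check repeating roughly every 2.5–3 seconds. A minimal poll-until-deadline loop in that style — the interval and timeout here are illustrative values, not minikube's actual configuration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep check that opens each cycle in the log.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when a matching process exists
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // roughly the cadence seen in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}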
	I1205 07:50:36.073913  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:36.085180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:36.085254  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:36.111524  299667 cri.go:89] found id: ""
	I1205 07:50:36.111549  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.111558  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:36.111565  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:36.111624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:36.136758  299667 cri.go:89] found id: ""
	I1205 07:50:36.136832  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.136856  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:36.136874  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:36.136999  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:36.170081  299667 cri.go:89] found id: ""
	I1205 07:50:36.170105  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.170113  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:36.170120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:36.170177  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:36.194713  299667 cri.go:89] found id: ""
	I1205 07:50:36.194738  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.194747  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:36.194753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:36.194817  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:36.219168  299667 cri.go:89] found id: ""
	I1205 07:50:36.219190  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.219199  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:36.219205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:36.219272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:36.243582  299667 cri.go:89] found id: ""
	I1205 07:50:36.243653  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.243676  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:36.243694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:36.243775  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:36.268659  299667 cri.go:89] found id: ""
	I1205 07:50:36.268730  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.268754  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:36.268771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:36.268853  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:36.293268  299667 cri.go:89] found id: ""
	I1205 07:50:36.293338  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.293361  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:36.293383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:36.293416  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:36.372932  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:36.372960  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:36.372972  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:36.400267  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:36.400358  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:36.432348  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:36.432371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:36.488499  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:36.488533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
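
The "Gathering logs" steps run a fixed set of shell commands: kubelet and containerd units via journalctl, a filtered dmesg, and a container-status listing. A sketch that collects the same sources into one report — the command strings are copied verbatim from the log; bundling them this way is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same log sources each cycle gathers, keyed by label.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for label, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}
}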
	I1205 07:50:39.002493  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:39.016301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:39.016371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:39.041723  299667 cri.go:89] found id: ""
	I1205 07:50:39.041799  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.041815  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:39.041823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:39.041885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:39.066151  299667 cri.go:89] found id: ""
	I1205 07:50:39.066174  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.066183  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:39.066189  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:39.066266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:39.090650  299667 cri.go:89] found id: ""
	I1205 07:50:39.090673  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.090682  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:39.090688  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:39.090745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:39.119700  299667 cri.go:89] found id: ""
	I1205 07:50:39.119732  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.119740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:39.119747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:39.119810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:39.144307  299667 cri.go:89] found id: ""
	I1205 07:50:39.144369  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.144389  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:39.144406  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:39.144488  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:39.171025  299667 cri.go:89] found id: ""
	I1205 07:50:39.171048  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.171057  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:39.171063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:39.171127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:39.195100  299667 cri.go:89] found id: ""
	I1205 07:50:39.195121  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.195130  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:39.195136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:39.195197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:39.218959  299667 cri.go:89] found id: ""
	I1205 07:50:39.218980  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.218991  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:39.219000  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:39.219010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:39.243315  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:39.243346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:39.270633  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:39.270709  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:39.330141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:39.330172  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.345855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:39.345883  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:39.426940  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
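
Each cycle ends with the same failing command: the version-pinned kubectl binary run against the in-VM kubeconfig. To reproduce it outside the test harness, a sketch of that exact invocation (paths exactly as in the log; this is not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The command every "describe nodes" step runs in the log above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// With no apiserver on localhost:8443 this exits 1, as seen above.
		fmt.Println("describe nodes failed:", err)
	}
}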
	I1205 07:50:41.928763  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:41.939293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:41.939415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:41.964816  299667 cri.go:89] found id: ""
	I1205 07:50:41.964850  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.964859  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:41.964865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:41.964931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:41.990880  299667 cri.go:89] found id: ""
	I1205 07:50:41.990914  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.990923  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:41.990929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:41.990996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:42.022456  299667 cri.go:89] found id: ""
	I1205 07:50:42.022483  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.022494  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:42.022501  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:42.022570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:42.049261  299667 cri.go:89] found id: ""
	I1205 07:50:42.049328  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.049352  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:42.049369  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:42.049446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:42.077034  299667 cri.go:89] found id: ""
	I1205 07:50:42.077108  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.077134  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:42.077255  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:42.077338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:42.114881  299667 cri.go:89] found id: ""
	I1205 07:50:42.114910  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.114921  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:42.114928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:42.114994  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:42.151897  299667 cri.go:89] found id: ""
	I1205 07:50:42.151926  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.151936  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:42.151944  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:42.152012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:42.185532  299667 cri.go:89] found id: ""
	I1205 07:50:42.185556  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.185565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:42.185574  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:42.185585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:42.246490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:42.246537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:42.262324  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:42.262359  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:42.331135  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:42.331201  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:42.331219  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:42.358803  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:42.358836  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:44.909321  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:44.920001  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:44.920070  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:44.945367  299667 cri.go:89] found id: ""
	I1205 07:50:44.945392  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.945401  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:44.945407  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:44.945463  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:44.970751  299667 cri.go:89] found id: ""
	I1205 07:50:44.970779  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.970788  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:44.970794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:44.970873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:44.999654  299667 cri.go:89] found id: ""
	I1205 07:50:44.999678  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.999688  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:44.999694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:44.999760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:45.065387  299667 cri.go:89] found id: ""
	I1205 07:50:45.065496  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.065521  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:45.065554  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:45.065661  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:45.101338  299667 cri.go:89] found id: ""
	I1205 07:50:45.101365  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.101375  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:45.101386  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:45.101459  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:45.140148  299667 cri.go:89] found id: ""
	I1205 07:50:45.140181  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.140192  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:45.140200  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:45.140301  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:45.178981  299667 cri.go:89] found id: ""
	I1205 07:50:45.179025  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.179035  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:45.179043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:45.179176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:45.219922  299667 cri.go:89] found id: ""
	I1205 07:50:45.219949  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.219958  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:45.219969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:45.219989  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:45.291787  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:45.291824  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:45.306539  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:45.306565  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:45.383110  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:45.383171  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:45.383206  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:45.410722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:45.410808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:47.941304  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:47.952011  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:47.952084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:47.978179  299667 cri.go:89] found id: ""
	I1205 07:50:47.978201  299667 logs.go:282] 0 containers: []
	W1205 07:50:47.978210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:47.978216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:47.978274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:48.005927  299667 cri.go:89] found id: ""
	I1205 07:50:48.005954  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.005964  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:48.005971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:48.006042  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:48.040049  299667 cri.go:89] found id: ""
	I1205 07:50:48.040133  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.040156  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:48.040175  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:48.040269  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:48.066524  299667 cri.go:89] found id: ""
	I1205 07:50:48.066549  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.066558  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:48.066564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:48.066627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:48.096997  299667 cri.go:89] found id: ""
	I1205 07:50:48.097026  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.097036  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:48.097043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:48.097103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:48.123968  299667 cri.go:89] found id: ""
	I1205 07:50:48.123990  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.123999  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:48.124005  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:48.124066  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:48.151529  299667 cri.go:89] found id: ""
	I1205 07:50:48.151554  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.151564  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:48.151570  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:48.151629  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:48.181245  299667 cri.go:89] found id: ""
	I1205 07:50:48.181270  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.181279  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:48.181297  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:48.181308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:48.240786  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:48.240832  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:48.255504  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:48.255533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:48.325828  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:48.325849  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:48.325862  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:48.350818  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:48.350898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:50.887376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:50.898712  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:50.898787  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:50.926387  299667 cri.go:89] found id: ""
	I1205 07:50:50.926412  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.926421  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:50.926428  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:50.926499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:50.951318  299667 cri.go:89] found id: ""
	I1205 07:50:50.951341  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.951349  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:50.951356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:50.951431  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:50.978509  299667 cri.go:89] found id: ""
	I1205 07:50:50.978536  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.978545  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:50.978551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:50.978614  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:51.017851  299667 cri.go:89] found id: ""
	I1205 07:50:51.017875  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.017884  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:51.017894  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:51.017957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:51.048705  299667 cri.go:89] found id: ""
	I1205 07:50:51.048772  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.048797  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:51.048815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:51.048901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:51.078364  299667 cri.go:89] found id: ""
	I1205 07:50:51.078427  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.078448  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:51.078468  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:51.078560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:51.110914  299667 cri.go:89] found id: ""
	I1205 07:50:51.110955  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.110965  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:51.110970  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:51.111064  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:51.136737  299667 cri.go:89] found id: ""
	I1205 07:50:51.136762  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.136771  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:51.136781  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:51.136793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:51.197928  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:51.197949  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:51.197961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:51.222938  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:51.222968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:51.253887  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:51.253914  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:51.309729  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:51.309759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:53.824280  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:53.834821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:53.834895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:53.882567  299667 cri.go:89] found id: ""
	I1205 07:50:53.882607  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.882617  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:53.882623  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:53.882708  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:53.924413  299667 cri.go:89] found id: ""
	I1205 07:50:53.924439  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.924447  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:53.924454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:53.924521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:53.949296  299667 cri.go:89] found id: ""
	I1205 07:50:53.949329  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.949339  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:53.949345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:53.949421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:53.973974  299667 cri.go:89] found id: ""
	I1205 07:50:53.974036  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.974050  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:53.974058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:53.974114  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:53.999073  299667 cri.go:89] found id: ""
	I1205 07:50:53.999139  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.999154  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:53.999162  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:53.999221  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:54.026401  299667 cri.go:89] found id: ""
	I1205 07:50:54.026425  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.026434  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:54.026441  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:54.026523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:54.056156  299667 cri.go:89] found id: ""
	I1205 07:50:54.056181  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.056191  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:54.056197  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:54.056266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:54.080916  299667 cri.go:89] found id: ""
	I1205 07:50:54.080955  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.080964  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:54.080973  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:54.080985  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:54.105836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:54.105870  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:54.134673  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:54.134702  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:54.191141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:54.191175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:54.204290  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:54.204332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:54.267087  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:56.768821  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:56.779222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:56.779288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:56.807155  299667 cri.go:89] found id: ""
	I1205 07:50:56.807179  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.807188  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:56.807195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:56.807280  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:56.831710  299667 cri.go:89] found id: ""
	I1205 07:50:56.831737  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.831746  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:56.831753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:56.831812  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:56.867145  299667 cri.go:89] found id: ""
	I1205 07:50:56.867169  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.867178  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:56.867185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:56.867243  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:56.893127  299667 cri.go:89] found id: ""
	I1205 07:50:56.893152  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.893174  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:56.893180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:56.893237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:56.922421  299667 cri.go:89] found id: ""
	I1205 07:50:56.922450  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.922460  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:56.922466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:56.922543  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:56.945778  299667 cri.go:89] found id: ""
	I1205 07:50:56.945808  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.945817  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:56.945823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:56.945907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:56.974442  299667 cri.go:89] found id: ""
	I1205 07:50:56.974473  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.974482  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:56.974489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:56.974559  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:56.998662  299667 cri.go:89] found id: ""
	I1205 07:50:56.998685  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.998694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:56.998703  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:56.998715  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:57.058833  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:57.058867  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:57.072293  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:57.072322  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:57.139010  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:57.139030  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:57.139042  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:57.163607  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:57.163639  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.693334  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:59.704756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:59.704870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:59.732171  299667 cri.go:89] found id: ""
	I1205 07:50:59.732198  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.732208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:59.732214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:59.732272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:59.757954  299667 cri.go:89] found id: ""
	I1205 07:50:59.757981  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.757990  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:59.757996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:59.758076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:59.787824  299667 cri.go:89] found id: ""
	I1205 07:50:59.787846  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.787855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:59.787862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:59.787977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:59.813474  299667 cri.go:89] found id: ""
	I1205 07:50:59.813497  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.813506  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:59.813512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:59.813580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:59.842057  299667 cri.go:89] found id: ""
	I1205 07:50:59.842079  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.842088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:59.842094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:59.842162  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:59.872569  299667 cri.go:89] found id: ""
	I1205 07:50:59.872593  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.872602  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:59.872608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:59.872671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:59.905410  299667 cri.go:89] found id: ""
	I1205 07:50:59.905435  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.905443  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:59.905450  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:59.905514  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:59.932703  299667 cri.go:89] found id: ""
	I1205 07:50:59.932744  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.932754  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:59.932763  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:59.932774  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.964043  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:59.964069  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:00.020877  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:00.023486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:00.055130  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:00.055166  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:00.182237  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:00.182280  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:00.182298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:02.739834  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:02.750886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:02.750958  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:02.776293  299667 cri.go:89] found id: ""
	I1205 07:51:02.776319  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.776328  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:02.776334  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:02.776393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:02.803043  299667 cri.go:89] found id: ""
	I1205 07:51:02.803080  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.803089  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:02.803096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:02.803176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:02.827935  299667 cri.go:89] found id: ""
	I1205 07:51:02.827957  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.827966  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:02.827972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:02.828031  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:02.859181  299667 cri.go:89] found id: ""
	I1205 07:51:02.859204  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.859215  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:02.859222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:02.859282  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:02.893626  299667 cri.go:89] found id: ""
	I1205 07:51:02.893668  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.893678  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:02.893685  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:02.893755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:02.924778  299667 cri.go:89] found id: ""
	I1205 07:51:02.924808  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.924818  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:02.924830  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:02.924890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:02.950184  299667 cri.go:89] found id: ""
	I1205 07:51:02.950211  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.950220  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:02.950229  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:02.950288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:02.976829  299667 cri.go:89] found id: ""
	I1205 07:51:02.976855  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.976865  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:02.976874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:02.976885  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:03.015998  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:03.016071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:03.072438  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:03.072473  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:03.087250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:03.087283  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:03.153281  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:03.153306  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:03.153319  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:05.678289  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:05.688964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:05.689032  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:05.714382  299667 cri.go:89] found id: ""
	I1205 07:51:05.714403  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.714412  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:05.714419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:05.714486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:05.743946  299667 cri.go:89] found id: ""
	I1205 07:51:05.743968  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.743976  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:05.743983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:05.744043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:05.768270  299667 cri.go:89] found id: ""
	I1205 07:51:05.768293  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.768303  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:05.768309  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:05.768367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:05.795557  299667 cri.go:89] found id: ""
	I1205 07:51:05.795580  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.795588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:05.795595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:05.795652  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:05.820607  299667 cri.go:89] found id: ""
	I1205 07:51:05.820634  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.820643  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:05.820649  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:05.820707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:05.853624  299667 cri.go:89] found id: ""
	I1205 07:51:05.853648  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.853657  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:05.853670  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:05.853752  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:05.885144  299667 cri.go:89] found id: ""
	I1205 07:51:05.885200  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.885213  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:05.885219  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:05.885296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:05.917755  299667 cri.go:89] found id: ""
	I1205 07:51:05.917777  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.917785  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:05.917794  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:05.917808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:05.978242  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:05.978286  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:05.992931  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:05.992961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:06.070949  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:06.070979  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:06.070992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:06.096749  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:06.096780  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.634532  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:08.646959  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:08.647038  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:08.678851  299667 cri.go:89] found id: ""
	I1205 07:51:08.678875  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.678884  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:08.678890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:08.678954  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:08.702970  299667 cri.go:89] found id: ""
	I1205 07:51:08.702992  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.703001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:08.703006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:08.703063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:08.727238  299667 cri.go:89] found id: ""
	I1205 07:51:08.727259  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.727267  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:08.727273  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:08.727329  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:08.752084  299667 cri.go:89] found id: ""
	I1205 07:51:08.752106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.752114  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:08.752120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:08.752183  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:08.775775  299667 cri.go:89] found id: ""
	I1205 07:51:08.775797  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.775805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:08.775811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:08.775878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:08.800101  299667 cri.go:89] found id: ""
	I1205 07:51:08.800122  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.800130  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:08.800136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:08.800193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:08.826081  299667 cri.go:89] found id: ""
	I1205 07:51:08.826106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.826115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:08.826121  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:08.826179  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:08.850937  299667 cri.go:89] found id: ""
	I1205 07:51:08.850969  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.850979  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:08.850987  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:08.851004  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.884057  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:08.884093  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:08.946750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:08.946793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:08.960852  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:08.960880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:09.030565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:09.030587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:09.030601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:11.556651  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:11.567626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:11.567701  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:11.595760  299667 cri.go:89] found id: ""
	I1205 07:51:11.595786  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.595795  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:11.595802  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:11.595859  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:11.646030  299667 cri.go:89] found id: ""
	I1205 07:51:11.646056  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.646065  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:11.646072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:11.646138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:11.675282  299667 cri.go:89] found id: ""
	I1205 07:51:11.675310  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.675319  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:11.675325  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:11.675385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:11.699688  299667 cri.go:89] found id: ""
	I1205 07:51:11.699712  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.699721  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:11.699727  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:11.699791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:11.723819  299667 cri.go:89] found id: ""
	I1205 07:51:11.723843  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.723852  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:11.723859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:11.723915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:11.751470  299667 cri.go:89] found id: ""
	I1205 07:51:11.751496  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.751505  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:11.751512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:11.751568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:11.775893  299667 cri.go:89] found id: ""
	I1205 07:51:11.775921  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.775929  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:11.775936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:11.775993  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:11.802990  299667 cri.go:89] found id: ""
	I1205 07:51:11.803012  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.803021  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:11.803033  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:11.803044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:11.859684  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:11.859767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:11.876859  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:11.876889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:11.952118  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:11.944168   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.944893   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.946566   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.947157   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.948800   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:11.952191  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:11.952220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:11.976596  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:11.976630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.510895  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:14.522084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:14.522151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:14.554050  299667 cri.go:89] found id: ""
	I1205 07:51:14.554069  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.554078  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:14.554084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:14.554139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:14.581712  299667 cri.go:89] found id: ""
	I1205 07:51:14.581732  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.581740  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:14.581746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:14.581810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:14.658701  299667 cri.go:89] found id: ""
	I1205 07:51:14.658723  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.658731  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:14.658737  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:14.658803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:14.686921  299667 cri.go:89] found id: ""
	I1205 07:51:14.686940  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.686948  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:14.686954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:14.687024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:14.720928  299667 cri.go:89] found id: ""
	I1205 07:51:14.720949  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.720957  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:14.720972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:14.721046  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:14.758959  299667 cri.go:89] found id: ""
	I1205 07:51:14.758983  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.758992  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:14.758998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:14.759054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:14.810754  299667 cri.go:89] found id: ""
	I1205 07:51:14.810775  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.810888  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:14.810895  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:14.810966  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:14.865350  299667 cri.go:89] found id: ""
	I1205 07:51:14.865369  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.865379  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:14.865387  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:14.865398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:14.920139  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:14.920170  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.973197  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:14.973224  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:15.042929  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:15.042968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:15.069350  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:15.069377  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:15.167229  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:15.157061   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.158379   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.159455   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.160498   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.161615   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:17.667454  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:17.677695  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:17.677767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:17.710656  299667 cri.go:89] found id: ""
	I1205 07:51:17.710678  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.710687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:17.710693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:17.710755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:17.738643  299667 cri.go:89] found id: ""
	I1205 07:51:17.738665  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.738674  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:17.738680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:17.738736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:17.762784  299667 cri.go:89] found id: ""
	I1205 07:51:17.762806  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.762815  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:17.762821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:17.762880  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:17.788678  299667 cri.go:89] found id: ""
	I1205 07:51:17.788699  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.788714  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:17.788720  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:17.788776  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:17.818009  299667 cri.go:89] found id: ""
	I1205 07:51:17.818031  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.818040  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:17.818046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:17.818103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:17.850251  299667 cri.go:89] found id: ""
	I1205 07:51:17.850272  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.850288  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:17.850295  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:17.850354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:17.879482  299667 cri.go:89] found id: ""
	I1205 07:51:17.879503  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.879512  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:17.879518  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:17.879579  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:17.916240  299667 cri.go:89] found id: ""
	I1205 07:51:17.916261  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.916270  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:17.916278  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:17.916344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:17.945888  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:17.945915  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:18.004030  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:18.004079  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:18.022346  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:18.022422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:18.096445  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:18.087987   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.088572   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090232   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090775   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.092338   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:18.096468  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:18.096481  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.623691  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:20.635279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:20.635409  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:20.670295  299667 cri.go:89] found id: ""
	I1205 07:51:20.670369  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.670390  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:20.670410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:20.670493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:20.701924  299667 cri.go:89] found id: ""
	I1205 07:51:20.701948  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.701957  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:20.701964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:20.702055  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:20.727557  299667 cri.go:89] found id: ""
	I1205 07:51:20.727599  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.727622  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:20.727638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:20.727714  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:20.753615  299667 cri.go:89] found id: ""
	I1205 07:51:20.753640  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.753648  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:20.753655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:20.753744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:20.778426  299667 cri.go:89] found id: ""
	I1205 07:51:20.778450  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.778459  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:20.778466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:20.778556  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:20.803580  299667 cri.go:89] found id: ""
	I1205 07:51:20.803605  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.803615  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:20.803638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:20.803707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:20.833142  299667 cri.go:89] found id: ""
	I1205 07:51:20.833193  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.833202  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:20.833208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:20.833285  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:20.868368  299667 cri.go:89] found id: ""
	I1205 07:51:20.868443  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.868465  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:20.868486  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:20.868523  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.895451  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:20.895524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:20.926652  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:20.926677  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:20.981657  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:20.981692  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:20.995302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:20.995329  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:21.064074  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:21.055838   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.056503   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.058334   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.059023   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.060931   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:23.564875  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:23.575583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:23.575650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:23.616208  299667 cri.go:89] found id: ""
	I1205 07:51:23.616234  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.616243  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:23.616251  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:23.616314  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:23.645044  299667 cri.go:89] found id: ""
	I1205 07:51:23.645068  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.645077  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:23.645083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:23.645148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:23.679840  299667 cri.go:89] found id: ""
	I1205 07:51:23.679861  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.679870  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:23.679876  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:23.679931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:23.704932  299667 cri.go:89] found id: ""
	I1205 07:51:23.704954  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.704962  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:23.704980  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:23.705040  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:23.730380  299667 cri.go:89] found id: ""
	I1205 07:51:23.730403  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.730411  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:23.730418  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:23.730483  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:23.754200  299667 cri.go:89] found id: ""
	I1205 07:51:23.754224  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.754233  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:23.754240  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:23.754318  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:23.778888  299667 cri.go:89] found id: ""
	I1205 07:51:23.778913  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.778921  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:23.778927  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:23.778983  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:23.803021  299667 cri.go:89] found id: ""
	I1205 07:51:23.803045  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.803054  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:23.803063  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:23.803074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:23.859725  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:23.859805  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:23.878639  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:23.878714  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:23.953245  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:23.953267  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:23.953280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:23.978428  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:23.978460  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:26.510161  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:26.520589  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:26.520663  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:26.545475  299667 cri.go:89] found id: ""
	I1205 07:51:26.545500  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.545508  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:26.545515  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:26.545570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:26.570378  299667 cri.go:89] found id: ""
	I1205 07:51:26.570401  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.570409  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:26.570416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:26.570476  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:26.596521  299667 cri.go:89] found id: ""
	I1205 07:51:26.596547  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.596556  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:26.596562  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:26.596618  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:26.624228  299667 cri.go:89] found id: ""
	I1205 07:51:26.624255  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.624264  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:26.624280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:26.624336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:26.650763  299667 cri.go:89] found id: ""
	I1205 07:51:26.650797  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.650807  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:26.650813  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:26.650870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:26.681944  299667 cri.go:89] found id: ""
	I1205 07:51:26.681972  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.681980  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:26.681987  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:26.682043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:26.706897  299667 cri.go:89] found id: ""
	I1205 07:51:26.706918  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.706927  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:26.706933  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:26.706991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:26.732536  299667 cri.go:89] found id: ""
	I1205 07:51:26.732560  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.732569  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:26.732578  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:26.732619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:26.789640  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:26.789673  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:26.803060  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:26.803089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:26.884697  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:26.884720  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:26.884737  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:26.912821  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:26.912856  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:29.445153  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:29.455673  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:29.455740  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:29.479669  299667 cri.go:89] found id: ""
	I1205 07:51:29.479694  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.479702  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:29.479709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:29.479768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:29.504129  299667 cri.go:89] found id: ""
	I1205 07:51:29.504151  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.504160  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:29.504166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:29.504223  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:29.528037  299667 cri.go:89] found id: ""
	I1205 07:51:29.528061  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.528071  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:29.528077  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:29.528137  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:29.553104  299667 cri.go:89] found id: ""
	I1205 07:51:29.553129  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.553138  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:29.553145  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:29.553252  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:29.582155  299667 cri.go:89] found id: ""
	I1205 07:51:29.582180  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.582189  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:29.582195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:29.582251  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:29.616156  299667 cri.go:89] found id: ""
	I1205 07:51:29.616181  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.616190  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:29.616205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:29.616279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:29.643373  299667 cri.go:89] found id: ""
	I1205 07:51:29.643399  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.643407  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:29.643413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:29.643474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:29.669624  299667 cri.go:89] found id: ""
	I1205 07:51:29.669649  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.669658  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:29.669667  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:29.669678  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:29.725864  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:29.725897  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:29.739284  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:29.739311  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:29.812338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:29.812358  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:29.812371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:29.837776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:29.837808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:32.374773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:32.385440  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:32.385519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:32.410264  299667 cri.go:89] found id: ""
	I1205 07:51:32.410285  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.410294  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:32.410301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:32.410380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:32.435693  299667 cri.go:89] found id: ""
	I1205 07:51:32.435716  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.435724  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:32.435730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:32.435789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:32.459782  299667 cri.go:89] found id: ""
	I1205 07:51:32.459854  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.459865  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:32.459872  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:32.460140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:32.490196  299667 cri.go:89] found id: ""
	I1205 07:51:32.490221  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.490230  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:32.490236  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:32.490302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:32.515432  299667 cri.go:89] found id: ""
	I1205 07:51:32.515456  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.515465  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:32.515472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:32.515535  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:32.544631  299667 cri.go:89] found id: ""
	I1205 07:51:32.544657  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.544666  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:32.544672  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:32.544733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:32.568734  299667 cri.go:89] found id: ""
	I1205 07:51:32.568759  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.568768  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:32.568785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:32.568841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:32.593347  299667 cri.go:89] found id: ""
	I1205 07:51:32.593375  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.593385  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:32.593394  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:32.593406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:32.663939  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:32.663975  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:32.678486  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:32.678514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:32.740819  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:32.740842  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:32.740854  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:32.765510  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:32.765539  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:35.296522  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:35.310277  299667 out.go:203] 
	W1205 07:51:35.313261  299667 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 07:51:35.313316  299667 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 07:51:35.313333  299667 out.go:285] * Related issues:
	W1205 07:51:35.313353  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 07:51:35.313373  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 07:51:35.316371  299667 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
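The K8S_APISERVER_MISSING exit above means the six-minute wait loop (the repeated "pgrep -xnf kube-apiserver" / "crictl ps" probes in the log) never observed an apiserver process. A minimal sketch of running the same checks by hand, assuming the profile name taken from the failing command; getenforce may not be present in the kicbase image:

    # shell into the node that failed to come up
    minikube ssh -p newest-cni-622440
    # the probes the wait loop runs: is any apiserver process or container present?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    # kubelet logs usually explain why the apiserver static pod never started
    sudo journalctl -u kubelet -n 100 --no-pager
    # per the suggestion above, confirm SELinux is not enforcing
    getenforce 2>/dev/null || echo "SELinux tooling not installed"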
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
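Both proxy variables being "<empty>" rules out a host proxy as the source of the refused connections on localhost:8443. An equivalent snapshot can be taken locally with (illustrative):

    env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"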
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-622440
helpers_test.go:243: (dbg) docker inspect newest-cni-622440:
-- stdout --
	[
	    {
	        "Id": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	        "Created": "2025-12-05T07:34:55.965403434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:25.584904359Z",
	            "FinishedAt": "2025-12-05T07:45:24.024543459Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hosts",
	        "LogPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4-json.log",
	        "Name": "/newest-cni-622440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-622440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-622440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	                "LowerDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-622440",
	                "Source": "/var/lib/docker/volumes/newest-cni-622440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-622440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-622440",
	                "name.minikube.sigs.k8s.io": "newest-cni-622440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed9530bf43b75054636d02a5c2e26f04f7734993d5bbcca1755d31d58cd478eb",
	            "SandboxKey": "/var/run/docker/netns/ed9530bf43b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-622440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:fd:48:71:b9:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96c6294e00fc4b96dda84202da479b822dd69419748060a344f1800d21559cfe",
	                    "EndpointID": "58c3f199e7d48a7db52c99942eb204475e9d0d215b5c84cb3379d82aa57f00e6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-622440",
	                        "9420074472d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
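
The inspect dump confirms the container itself is healthy: Running with RestartCount 0, privileged, and all five expected ports bound to localhost. When only those fields matter, a narrower query in the same -f template style the harness already uses can replace the full dump (a sketch; field paths as in the output above):

	# container state and restart timestamp only
	docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}}' newest-cni-622440
	# host ports actually bound (22 -> 33103, 8443 -> 33106 in this run)
	docker inspect -f '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{println}}{{end}}' newest-cni-622440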
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (348.526982ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
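"Running" alongside a non-zero exit is expected here: --format={{.Host}} prints only the machine state, while the exit code reflects the unhealthy cluster components. A fuller template would show which piece is down (a sketch; Host, Kubelet, APIServer and Kubeconfig are the standard minikube status template fields):

	out/minikube-linux-arm64 status -p newest-cni-622440 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'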
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25: (1.77598261s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ stop    │ -p no-preload-241270 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ stop    │ -p newest-cni-622440 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-622440 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:25.090022  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090052  299667 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:25.090069  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090384  299667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:25.090842  299667 out.go:368] Setting JSON to false
	I1205 07:45:25.091806  299667 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8872,"bootTime":1764911853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:25.091916  299667 start.go:143] virtualization:  
	I1205 07:45:25.094988  299667 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:25.098817  299667 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:25.098909  299667 notify.go:221] Checking for updates...
	I1205 07:45:25.105041  299667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:25.108085  299667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:25.111075  299667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:25.114070  299667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:25.117093  299667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:25.120796  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:25.121387  299667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:25.146702  299667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:25.146810  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.201970  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.192879595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.202086  299667 docker.go:319] overlay module found
	I1205 07:45:25.205420  299667 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:25.208200  299667 start.go:309] selected driver: docker
	I1205 07:45:25.208216  299667 start.go:927] validating driver "docker" against &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.208322  299667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:25.209018  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.271889  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.262935561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.272253  299667 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:45:25.272290  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:25.272360  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:25.272408  299667 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.275549  299667 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:45:25.278335  299667 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:25.281398  299667 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:25.284371  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:25.284526  299667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:25.304420  299667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:25.304443  299667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:45:25.350688  299667 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:45:25.522612  299667 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:45:25.522872  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.522902  299667 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.522986  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:25.522997  299667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.314µs
	I1205 07:45:25.523010  299667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:25.523020  299667 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523050  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:25.523054  299667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.177µs
	I1205 07:45:25.523060  299667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523070  299667 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523108  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:25.523117  299667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.906µs
	I1205 07:45:25.523123  299667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523137  299667 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523144  299667 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:25.523164  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:25.523170  299667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.867µs
	I1205 07:45:25.523176  299667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523180  299667 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523184  299667 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523220  299667 start.go:364] duration metric: took 26.043µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:45:25.523232  299667 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:25.523223  299667 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523248  299667 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:25.523282  299667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.595µs
	I1205 07:45:25.523288  299667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:25.523289  299667 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523237  299667 fix.go:54] fixHost starting: 
	I1205 07:45:25.523319  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:25.523328  299667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 144.182µs
	I1205 07:45:25.523335  299667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523296  299667 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 85.228µs
	I1205 07:45:25.523346  299667 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:25.523368  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:25.523373  299667 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 85.498µs
	I1205 07:45:25.523378  299667 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:25.523390  299667 cache.go:87] Successfully saved all images to host disk.
	I1205 07:45:25.523585  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.542111  299667 fix.go:112] recreateIfNeeded on newest-cni-622440: state=Stopped err=<nil>
	W1205 07:45:25.542142  299667 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:45:26.103157  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:26.555898  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:26.616440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:26.616472  297527 retry.go:31] will retry after 4.350402654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.227883  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:27.290238  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.290274  297527 retry.go:31] will retry after 4.46337589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:28.602428  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:25.545608  299667 out.go:252] * Restarting existing docker container for "newest-cni-622440" ...
	I1205 07:45:25.545717  299667 cli_runner.go:164] Run: docker start newest-cni-622440
	I1205 07:45:25.826053  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.856383  299667 kic.go:430] container "newest-cni-622440" state is running.
	I1205 07:45:25.856775  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:25.877321  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.877542  299667 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:25.878047  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:25.903226  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:25.903553  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:25.903561  299667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:25.904107  299667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35184->127.0.0.1:33103: read: connection reset by peer
	I1205 07:45:29.056730  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.056754  299667 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:45:29.056818  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.074923  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.075238  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.075256  299667 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:45:29.238817  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.238924  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.256394  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.256698  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.256720  299667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:29.409360  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:29.409384  299667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:29.409403  299667 ubuntu.go:190] setting up certificates
	I1205 07:45:29.409412  299667 provision.go:84] configureAuth start
	I1205 07:45:29.409469  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:29.426522  299667 provision.go:143] copyHostCerts
	I1205 07:45:29.426598  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:29.426610  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:29.426695  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:29.426806  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:29.426817  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:29.426846  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:29.426910  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:29.426920  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:29.426946  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:29.427008  299667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:45:29.583992  299667 provision.go:177] copyRemoteCerts
	I1205 07:45:29.584079  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:29.584142  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.601241  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.705331  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:29.723929  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:29.741035  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:45:29.758654  299667 provision.go:87] duration metric: took 349.219709ms to configureAuth
	I1205 07:45:29.758682  299667 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:29.758882  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:29.758893  299667 machine.go:97] duration metric: took 3.881342431s to provisionDockerMachine
	I1205 07:45:29.758901  299667 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:45:29.758917  299667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:29.758966  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:29.759008  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.777016  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.881927  299667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:29.889885  299667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:29.889915  299667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:29.889927  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:29.889986  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:29.890075  299667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:29.890181  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:29.899716  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:29.920554  299667 start.go:296] duration metric: took 161.628343ms for postStartSetup
	I1205 07:45:29.920647  299667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:29.920717  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.938834  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.040045  299667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:30.045649  299667 fix.go:56] duration metric: took 4.522402293s for fixHost
	I1205 07:45:30.045683  299667 start.go:83] releasing machines lock for "newest-cni-622440", held for 4.522453444s
	I1205 07:45:30.045767  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:30.065623  299667 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:30.065678  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.065694  299667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:30.065761  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.087940  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.099183  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.281502  299667 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:30.288110  299667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:30.292481  299667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:30.292550  299667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:30.300562  299667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:30.300584  299667 start.go:496] detecting cgroup driver to use...
	I1205 07:45:30.300616  299667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:30.300666  299667 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:30.318364  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:30.332088  299667 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:30.332151  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:30.348258  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:30.361775  299667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:30.469361  299667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:30.577441  299667 docker.go:234] disabling docker service ...
	I1205 07:45:30.577508  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:30.592915  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:30.607578  299667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:30.752107  299667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:30.872747  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:30.888408  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:30.904134  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:30.914385  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:30.923315  299667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:30.923423  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:30.932175  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.940943  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:30.949729  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.958228  299667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:30.965941  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:30.980042  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:30.995740  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:31.009747  299667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:31.019595  299667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:31.028525  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.153254  299667 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:45:31.252043  299667 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:31.252123  299667 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:31.255724  299667 start.go:564] Will wait 60s for crictl version
	I1205 07:45:31.255784  299667 ssh_runner.go:195] Run: which crictl
	I1205 07:45:31.259402  299667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:31.288033  299667 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:31.288102  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.310723  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.334839  299667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:31.337671  299667 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:31.359874  299667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:31.365663  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.387524  299667 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:45:31.390412  299667 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:31.390547  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:31.390648  299667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:31.429142  299667 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:31.429206  299667 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:31.429215  299667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:31.429338  299667 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:31.429419  299667 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:31.463460  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:31.463487  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:31.463511  299667 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:45:31.463580  299667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:31.463714  299667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:45:31.463789  299667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:31.471606  299667 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:31.471702  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:31.480080  299667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:31.492950  299667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:31.505530  299667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:45:31.518323  299667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:31.521961  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.531618  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.655593  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:31.673339  299667 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:45:31.673398  299667 certs.go:195] generating shared ca certs ...
	I1205 07:45:31.673427  299667 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:31.673592  299667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:31.673665  299667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:31.673695  299667 certs.go:257] generating profile certs ...
	I1205 07:45:31.673812  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:45:31.673907  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:45:31.673970  299667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:45:31.674103  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:31.674164  299667 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:31.674197  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:31.674246  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:31.674289  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:31.674341  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:31.674413  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:31.675038  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:31.699874  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:31.718981  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:31.739011  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:31.757897  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:31.776123  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:31.794286  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:31.815714  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:31.832875  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:31.851417  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:31.868401  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:31.885858  299667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:31.898468  299667 ssh_runner.go:195] Run: openssl version
	I1205 07:45:31.904594  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.911851  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:31.919124  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922684  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922758  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.963682  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:31.970739  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.977808  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:31.985046  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988699  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988790  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:32.029966  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:32.037736  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.045196  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:32.052663  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056573  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056689  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.097976  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:32.106452  299667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:32.110712  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:32.154012  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:32.194946  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:32.235499  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:32.276192  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:32.316778  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 07:45:32.357969  299667 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:32.358063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:32.358128  299667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:32.393923  299667 cri.go:89] found id: ""
	I1205 07:45:32.393993  299667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:32.401825  299667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:32.401893  299667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:32.401977  299667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:32.409190  299667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:32.409869  299667 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.410186  299667 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-622440" cluster setting kubeconfig missing "newest-cni-622440" context setting]
	I1205 07:45:32.410754  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.412652  299667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:32.420082  299667 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:32.420112  299667 kubeadm.go:602] duration metric: took 18.200733ms to restartPrimaryControlPlane
	I1205 07:45:32.420122  299667 kubeadm.go:403] duration metric: took 62.162615ms to StartCluster
	I1205 07:45:32.420136  299667 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.420193  299667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.421089  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.421340  299667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:32.421617  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:32.421690  299667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:32.421796  299667 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-622440"
	I1205 07:45:32.421816  299667 addons.go:70] Setting default-storageclass=true in profile "newest-cni-622440"
	I1205 07:45:32.421860  299667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-622440"
	I1205 07:45:32.421826  299667 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-622440"
	I1205 07:45:32.421949  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.422169  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.422375  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.421807  299667 addons.go:70] Setting dashboard=true in profile "newest-cni-622440"
	I1205 07:45:32.422859  299667 addons.go:239] Setting addon dashboard=true in "newest-cni-622440"
	W1205 07:45:32.422869  299667 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:32.422895  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.423306  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.425911  299667 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:32.429270  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:32.459552  299667 addons.go:239] Setting addon default-storageclass=true in "newest-cni-622440"
	I1205 07:45:32.459590  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.459994  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.466676  299667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:32.469573  299667 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:32.469693  299667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.469710  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:32.469779  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.479022  299667 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:45:30.602600  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:30.967025  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:31.052948  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.052985  297527 retry.go:31] will retry after 7.944795354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.285879  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:31.386500  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.386531  297527 retry.go:31] will retry after 6.357223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.754709  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:31.845913  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.845950  297527 retry.go:31] will retry after 12.860014736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.103254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:32.484603  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:32.484629  299667 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:32.484694  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.517396  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.529599  299667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.529620  299667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:32.529685  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.549325  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.574838  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.643911  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:32.670090  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.687313  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:32.687343  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:32.721498  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:32.721518  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:32.728026  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.759870  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:32.759892  299667 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:32.773100  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:32.773119  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:32.790813  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:32.790887  299667 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:32.806943  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:32.807008  299667 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:32.827525  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:32.827547  299667 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:32.840144  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:32.840166  299667 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:32.856122  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:32.856196  299667 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:32.869771  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:33.097468  299667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:33.097593  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:33.097728  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097794  299667 retry.go:31] will retry after 241.658936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.097872  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097907  299667 retry.go:31] will retry after 176.603947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.098118  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.098157  299667 retry.go:31] will retry after 229.408257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.275635  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:33.328106  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.333654  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.333699  299667 retry.go:31] will retry after 493.072495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.339842  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:33.420976  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421140  299667 retry.go:31] will retry after 232.443098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.421103  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421275  299667 retry.go:31] will retry after 218.243264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.598377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:33.640183  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:33.654611  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.714507  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.714586  299667 retry.go:31] will retry after 296.021108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.735889  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.735929  299667 retry.go:31] will retry after 647.569018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.827334  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:33.912321  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.912410  299667 retry.go:31] will retry after 511.925432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.011792  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:34.070223  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.070270  299667 retry.go:31] will retry after 1.045041767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.098366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:34.384609  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:34.425097  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:34.456662  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.456771  299667 retry.go:31] will retry after 1.012360732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:34.490780  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.490815  299667 retry.go:31] will retry after 673.94662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.598028  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:35.602346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:37.602757  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:37.744028  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.809224  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.809268  297527 retry.go:31] will retry after 8.525278844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.998921  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:39.069453  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.069501  297527 retry.go:31] will retry after 21.498999078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.097803  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.115652  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:35.165241  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:35.189445  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.189528  299667 retry.go:31] will retry after 873.335351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:35.234071  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.234107  299667 retry.go:31] will retry after 1.250813401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.469343  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:35.535355  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.535386  299667 retry.go:31] will retry after 1.457971594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.598793  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.063166  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:36.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:36.141912  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.141992  299667 retry.go:31] will retry after 1.289648417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.485696  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:36.544841  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.544879  299667 retry.go:31] will retry after 2.662984572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
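The retry.go:31 entries above show minikube's addon applier waiting a growing, randomized delay (1.29s, 2.66s, ...) between failed kubectl apply attempts instead of retrying at a fixed interval. A minimal Go sketch of that backoff pattern follows; retryWithBackoff and its parameters are hypothetical illustration, not minikube's actual retry implementation:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping a randomized, growing delay between tries (compare the
    // "will retry after 1.289648417s" lines in the log above).
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Grow the delay with each attempt and add jitter so that
            // parallel appliers do not retry in lockstep.
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(5, time.Second, func() error {
            return fmt.Errorf("connect: connection refused")
        })
    }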
	I1205 07:45:36.598226  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.993607  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.063691  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.063774  299667 retry.go:31] will retry after 1.151172803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.098032  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.431865  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:37.492142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.492177  299667 retry.go:31] will retry after 3.504601193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.598357  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.098363  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.215346  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:38.274274  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.274309  299667 retry.go:31] will retry after 1.757329115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.597749  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.097719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.208847  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:39.266142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.266182  299667 retry.go:31] will retry after 3.436463849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
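Every one of these failures has the same root cause: before applying a manifest, kubectl downloads the cluster's OpenAPI schema from /openapi/v2 to validate the manifest client-side, and while the apiserver is down that download itself fails with "connection refused" (the --validate=false hint in the error message would skip the schema fetch entirely). Below is a small Go probe that mirrors the request kubectl is making; the function name is made up for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOpenAPI issues the same request kubectl makes before
    // client-side validation: GET /openapi/v2 on the apiserver. With
    // the apiserver down it fails with "connection refused", exactly
    // the error repeated throughout the log above.
    func probeOpenAPI(base string) error {
        client := &http.Client{
            Timeout: 32 * time.Second, // matches kubectl's ?timeout=32s
            Transport: &http.Transport{
                // Demo only: skip cert verification for localhost.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(base + "/openapi/v2")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
        return nil
    }

    func main() {
        if err := probeOpenAPI("https://localhost:8443"); err != nil {
            fmt.Println("validation would fail:", err)
        }
    }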
	I1205 07:45:39.598395  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.031973  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:40.102833  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:42.602360  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:44.706625  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:40.092374  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.092409  299667 retry.go:31] will retry after 2.182976597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.098469  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.598422  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.997583  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:41.059423  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.059455  299667 retry.go:31] will retry after 3.560419221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.098613  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.098453  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.276211  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:42.351488  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.351524  299667 retry.go:31] will retry after 9.602308898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.598167  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.703420  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:42.760290  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.760322  299667 retry.go:31] will retry after 5.381602643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:43.097810  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:43.597706  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.098335  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.597780  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
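Interleaved with the apply retries, the ssh_runner lines poll sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, waiting for an apiserver process to appear (-f matches against the full command line, -x requires the pattern to match it exactly, -n returns only the newest matching PID). A minimal sketch of that poll loop in Go; waitForAPIServer is a hypothetical name, and minikube actually runs the command through its ssh_runner rather than locally:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep on a half-second cadence, matching
    // the timestamps of the ssh_runner lines in the log above, until a
    // kube-apiserver process matches or the timeout expires.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x exact match, -n newest PID, -f match the full command line.
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if cmd.Run() == nil {
                return nil // pgrep exits 0 once a process matched
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }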
	I1205 07:45:44.620405  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:44.677458  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.677489  299667 retry.go:31] will retry after 4.279612118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:44.764830  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.764865  297527 retry.go:31] will retry after 17.369945393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:45.102956  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:46.334817  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:46.418483  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:46.418521  297527 retry.go:31] will retry after 23.303020683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:47.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:49.602799  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
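The node_ready.go lines from the parallel no-preload run perform the equivalent check at the cluster level: fetch /api/v1/nodes/no-preload-241270 and read the node's "Ready" condition, retrying while the apiserver refuses connections. Minikube goes through the Kubernetes client libraries for this; the sketch below only shows what extracting the Ready condition from the node JSON amounts to, with hypothetical type and function names:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // node mirrors only the part of /api/v1/nodes/<name> that the
    // readiness check needs: status.conditions.
    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // ready reports whether the node's "Ready" condition is "True".
    func ready(raw []byte) (bool, error) {
        var n node
        if err := json.Unmarshal(raw, &n); err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, fmt.Errorf(`condition "Ready" not found`)
    }

    func main() {
        sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
        fmt.Println(ready(sample))
    }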
	I1205 07:45:45.098273  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:45.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.597868  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.097740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.597768  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.097748  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.142199  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:48.202751  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.202784  299667 retry.go:31] will retry after 9.130347643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.958075  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:49.020580  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.020664  299667 retry.go:31] will retry after 5.816091686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:49.597778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:52.102357  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:54.603289  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
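The interleaved 297527 lines belong to the no-preload-241270 profile, where node_ready.go keeps re-fetching the node object and checking its Ready condition; each GET is refused because that cluster's apiserver is also down. A minimal client-go sketch of the check, assuming the kubeconfig path from the log; cadence and logging are simplified:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the node and reports whether its Ready condition is
// True. Illustrative sketch of the node_ready.go check, not the real code.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// The path the log is in: the GET itself is refused while the
		// apiserver is down, and the caller logs the error and retries.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(context.Background(), cs, "no-preload-241270")
	fmt.Println(ready, err)
}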
	I1205 07:45:50.097903  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.598277  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.098323  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.598320  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.954438  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:52.018482  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.018522  299667 retry.go:31] will retry after 11.887626777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.098608  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:52.598374  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.098377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.098330  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.597906  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.837992  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:54.928421  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:54.928451  299667 retry.go:31] will retry after 21.232814528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:57.103152  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:59.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:55.097998  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:55.598566  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.098233  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.598487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.333368  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:57.391373  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.391409  299667 retry.go:31] will retry after 6.534046571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.598447  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.098487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.597673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.098584  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.597752  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.568740  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:00.647111  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:00.647143  297527 retry.go:31] will retry after 19.124891194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:01.602386  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:02.135738  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:02.196508  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:02.196541  297527 retry.go:31] will retry after 23.234297555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:00.111473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.597738  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.097860  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.597786  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.598349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.097778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.906517  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:03.926085  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:03.977088  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:03.977126  299667 retry.go:31] will retry after 8.615984736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.014857  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.014953  299667 retry.go:31] will retry after 11.096851447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.098074  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.598727  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:06.103226  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:08.602282  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:09.722604  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:05.098302  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.598378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.098313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.098365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.597739  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.597740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.098581  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.598396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:09.788810  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:09.788894  297527 retry.go:31] will retry after 37.030083188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:10.602342  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:13.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:10.098145  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.097819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.598431  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.098421  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.593706  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:12.598498  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:12.687257  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:12.687290  299667 retry.go:31] will retry after 19.919210015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:13.098633  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:13.598345  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.097716  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.598398  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:15.602302  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:17.603239  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:15.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.112618  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:15.170666  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.170700  299667 retry.go:31] will retry after 26.586504873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.598228  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.161584  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:16.224162  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.224193  299667 retry.go:31] will retry after 29.423350117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.597722  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.097721  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.597743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.098656  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.598271  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.098404  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.598719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.772903  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:19.832639  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:19.832668  297527 retry.go:31] will retry after 32.800355392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:20.103191  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:22.602639  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:24.603138  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:20.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.597725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.097770  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.598319  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.097718  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.098368  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.598400  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.431569  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:25.488990  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:25.489023  297527 retry.go:31] will retry after 28.819883279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:27.102333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:29.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:25.098708  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.597766  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.098393  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.598238  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.098573  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.598365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.598524  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.097726  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.598366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:31.103394  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:33.602924  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:30.098021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.598337  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.098378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.097725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.597622  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:32.597702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:32.607176  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:32.654366  299667 cri.go:89] found id: ""
	I1205 07:46:32.654387  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.654395  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:32.654402  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:32.654460  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:46:32.707430  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707464  299667 retry.go:31] will retry after 35.686554771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707503  299667 cri.go:89] found id: ""
	I1205 07:46:32.707512  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.707519  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:32.707525  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:32.707583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:32.732319  299667 cri.go:89] found id: ""
	I1205 07:46:32.732341  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.732350  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:32.732356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:32.732414  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:32.756204  299667 cri.go:89] found id: ""
	I1205 07:46:32.756226  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.756235  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:32.756241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:32.756313  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:32.785401  299667 cri.go:89] found id: ""
	I1205 07:46:32.785423  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.785431  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:32.785437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:32.785493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:32.811348  299667 cri.go:89] found id: ""
	I1205 07:46:32.811373  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.811381  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:32.811388  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:32.811461  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:32.835578  299667 cri.go:89] found id: ""
	I1205 07:46:32.835603  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.835612  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:32.835618  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:32.835679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:32.861749  299667 cri.go:89] found id: ""
	I1205 07:46:32.861773  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.861781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
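
The cri.go/logs.go sequence above walks a fixed list of expected control-plane container names and asks crictl for any container, running or exited, matching each one; every query here comes back empty, confirming the control plane never started. A minimal sketch of that enumeration, again assuming a local shell:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        names := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range names {
            // crictl exits 0 with empty output when nothing matches.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
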
	I1205 07:46:32.861790  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:32.861801  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:32.937533  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:32.937555  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:32.937568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:32.962127  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:32.962161  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:32.989223  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:32.989256  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:33.046092  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:33.046128  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:46:36.102426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:38.602828  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
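
The node_ready warnings above come from repeatedly fetching the node object and inspecting its "Ready" condition; with no apiserver listening on 192.168.76.2:8443, every GET fails with "connection refused". A minimal sketch of one such probe over plain HTTPS (minikube authenticates with client certificates, which this illustration skips, hence InsecureSkipVerify):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // nodeReady GETs a node object and reports whether its Ready
    // condition is "True". Connection errors surface to the caller,
    // matching the "connect: connection refused" warnings above.
    func nodeReady(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification only because this is an illustration.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var n node
        if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        ready, err := nodeReady("https://192.168.76.2:8443/api/v1/nodes/no-preload-241270")
        fmt.Println(ready, err)
    }
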
	I1205 07:46:35.559882  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:35.570602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:35.570679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:35.597322  299667 cri.go:89] found id: ""
	I1205 07:46:35.597348  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.597358  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:35.597364  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:35.597420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:35.631556  299667 cri.go:89] found id: ""
	I1205 07:46:35.631585  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.631594  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:35.631605  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:35.631670  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:35.666766  299667 cri.go:89] found id: ""
	I1205 07:46:35.666790  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.666808  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:35.666851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:35.666928  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:35.696469  299667 cri.go:89] found id: ""
	I1205 07:46:35.696494  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.696503  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:35.696510  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:35.696570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:35.721564  299667 cri.go:89] found id: ""
	I1205 07:46:35.721587  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.721613  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:35.721620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:35.721679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:35.750450  299667 cri.go:89] found id: ""
	I1205 07:46:35.750474  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.750483  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:35.750490  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:35.750577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:35.779075  299667 cri.go:89] found id: ""
	I1205 07:46:35.779097  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.779105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:35.779111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:35.779171  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:35.804778  299667 cri.go:89] found id: ""
	I1205 07:46:35.804849  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.804870  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:35.804891  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.804928  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.818664  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.818691  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.896985  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:35.897010  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:35.897023  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:35.922964  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.922997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:35.950985  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:35.951012  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.510773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:38.521214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:38.521283  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:38.547037  299667 cri.go:89] found id: ""
	I1205 07:46:38.547061  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.547069  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:38.547088  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:38.547152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:38.571870  299667 cri.go:89] found id: ""
	I1205 07:46:38.571894  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.571903  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:38.571909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:38.571967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:38.597667  299667 cri.go:89] found id: ""
	I1205 07:46:38.597693  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.597701  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.597707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:38.597781  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:38.634302  299667 cri.go:89] found id: ""
	I1205 07:46:38.634328  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.634336  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:38.634343  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:38.634411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:38.662787  299667 cri.go:89] found id: ""
	I1205 07:46:38.662813  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.662822  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.662829  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:38.662886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:38.688000  299667 cri.go:89] found id: ""
	I1205 07:46:38.688026  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.688034  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:38.688040  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:38.688108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:38.712589  299667 cri.go:89] found id: ""
	I1205 07:46:38.712611  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.712619  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.712631  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:38.712688  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:38.736469  299667 cri.go:89] found id: ""
	I1205 07:46:38.736490  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.736499  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:38.736507  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.736521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.763556  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.763586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.818344  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.818379  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.832020  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.832054  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.931143  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:38.931164  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:38.931178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:40.603153  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:43.102740  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:41.457376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.468655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:41.468729  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:41.496317  299667 cri.go:89] found id: ""
	I1205 07:46:41.496391  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.496415  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:41.496434  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:41.496520  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:41.522205  299667 cri.go:89] found id: ""
	I1205 07:46:41.522230  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.522238  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:41.522244  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:41.522304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:41.547643  299667 cri.go:89] found id: ""
	I1205 07:46:41.547668  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.547677  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.547684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:41.547743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:41.576000  299667 cri.go:89] found id: ""
	I1205 07:46:41.576024  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.576032  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:41.576039  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:41.576093  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:41.610347  299667 cri.go:89] found id: ""
	I1205 07:46:41.610373  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.610393  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.610399  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:41.610455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:41.641947  299667 cri.go:89] found id: ""
	I1205 07:46:41.641974  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.641983  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:41.641990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:41.642049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:41.680331  299667 cri.go:89] found id: ""
	I1205 07:46:41.680355  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.680363  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.680370  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:41.680426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:41.707279  299667 cri.go:89] found id: ""
	I1205 07:46:41.707301  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.707310  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:41.707319  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.707331  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.720629  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.720654  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 07:46:41.757919  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:41.789558  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:41.789582  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:41.789596  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:41.829441  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.829475  299667 retry.go:31] will retry after 23.380573162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.840285  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.840316  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.875962  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.875990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.439978  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.450947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:44.451025  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:44.476311  299667 cri.go:89] found id: ""
	I1205 07:46:44.476335  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.476344  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:44.476350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:44.476420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:44.501030  299667 cri.go:89] found id: ""
	I1205 07:46:44.501064  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.501073  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:44.501078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:44.501138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:44.525674  299667 cri.go:89] found id: ""
	I1205 07:46:44.525697  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.525705  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.525711  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:44.525769  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:44.554878  299667 cri.go:89] found id: ""
	I1205 07:46:44.554903  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.554911  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:44.554918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:44.554991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:44.579773  299667 cri.go:89] found id: ""
	I1205 07:46:44.579796  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.579805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.579811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:44.579867  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:44.611991  299667 cri.go:89] found id: ""
	I1205 07:46:44.612017  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.612042  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:44.612049  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:44.612108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:44.646395  299667 cri.go:89] found id: ""
	I1205 07:46:44.646418  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.646427  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.646433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:44.646499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:44.674148  299667 cri.go:89] found id: ""
	I1205 07:46:44.674170  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.674178  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:44.674187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.674199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.734427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.734469  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.748531  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:44.748561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:44.815565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:44.815586  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:44.815601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:44.841456  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:44.841492  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:45.103537  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:46.819177  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:46.909187  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:46.909286  297527 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:47.602297  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:49.602426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:45.648666  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:45.706769  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:45.706803  299667 retry.go:31] will retry after 32.901994647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:47.381509  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.392949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:47.393065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:47.424033  299667 cri.go:89] found id: ""
	I1205 07:46:47.424057  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.424066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:47.424072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:47.424140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:47.451239  299667 cri.go:89] found id: ""
	I1205 07:46:47.451265  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.451275  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:47.451282  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:47.451342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:47.475229  299667 cri.go:89] found id: ""
	I1205 07:46:47.475250  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.475259  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.475265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:47.475322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:47.500010  299667 cri.go:89] found id: ""
	I1205 07:46:47.500036  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.500045  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:47.500051  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:47.500110  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:47.525665  299667 cri.go:89] found id: ""
	I1205 07:46:47.525691  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.525700  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:47.525707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:47.525767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:47.550876  299667 cri.go:89] found id: ""
	I1205 07:46:47.550902  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.550911  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:47.550917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:47.550978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:47.574838  299667 cri.go:89] found id: ""
	I1205 07:46:47.574904  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.574926  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:47.574940  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:47.575018  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:47.606672  299667 cri.go:89] found id: ""
	I1205 07:46:47.606698  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.606707  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:47.606716  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:47.606728  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.644360  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:47.644388  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:47.706982  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:47.707019  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:47.720731  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:47.720759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:47.782357  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:47.782378  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:47.782393  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:51.603232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:52.633653  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:52.692000  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:52.692106  297527 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:54.102683  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:54.310076  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:54.372261  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:54.372370  297527 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:46:54.375412  297527 out.go:179] * Enabled addons: 
	I1205 07:46:54.378282  297527 addons.go:530] duration metric: took 1m42.739564939s for enable addons: enabled=[]
	I1205 07:46:50.307630  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:50.318086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:50.318159  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:50.342816  299667 cri.go:89] found id: ""
	I1205 07:46:50.342838  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.342847  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:50.342853  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:50.342921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:50.371375  299667 cri.go:89] found id: ""
	I1205 07:46:50.371440  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.371462  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:50.371478  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:50.371566  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:50.401098  299667 cri.go:89] found id: ""
	I1205 07:46:50.401206  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.401224  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:50.401245  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:50.401310  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:50.432101  299667 cri.go:89] found id: ""
	I1205 07:46:50.432134  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.432143  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:50.432149  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:50.432262  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:50.457371  299667 cri.go:89] found id: ""
	I1205 07:46:50.457396  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.457405  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:50.457413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:50.457469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:50.486796  299667 cri.go:89] found id: ""
	I1205 07:46:50.486821  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.486830  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:50.486836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:50.486945  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:50.515505  299667 cri.go:89] found id: ""
	I1205 07:46:50.515529  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.515537  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:50.515544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:50.515606  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:50.543462  299667 cri.go:89] found id: ""
	I1205 07:46:50.543486  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.543495  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:50.543503  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:50.543561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:50.600091  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:50.600276  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:50.619872  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:50.619944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:50.690141  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:50.690160  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:50.690173  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.715362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:50.715398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:53.244467  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:53.256174  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:53.256240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:53.279782  299667 cri.go:89] found id: ""
	I1205 07:46:53.279803  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.279810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:53.279817  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:53.279878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:53.303793  299667 cri.go:89] found id: ""
	I1205 07:46:53.303813  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.303821  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:53.303827  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:53.303884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:53.332886  299667 cri.go:89] found id: ""
	I1205 07:46:53.332908  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.332916  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:53.332922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:53.332981  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:53.359130  299667 cri.go:89] found id: ""
	I1205 07:46:53.359153  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.359161  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:53.359168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:53.359229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:53.384922  299667 cri.go:89] found id: ""
	I1205 07:46:53.384947  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.384966  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:53.384972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:53.385033  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:53.409882  299667 cri.go:89] found id: ""
	I1205 07:46:53.409903  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.409912  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:53.409918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:53.409982  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:53.435229  299667 cri.go:89] found id: ""
	I1205 07:46:53.435254  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.435263  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:53.435269  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:53.435326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:53.460378  299667 cri.go:89] found id: ""
	I1205 07:46:53.460402  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.460411  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:53.460419  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:53.460430  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:53.515653  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:53.515686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:53.529252  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:53.529277  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:53.590407  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:53.590427  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:53.590439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:53.615638  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:53.615670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:56.102997  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:58.602448  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:56.149491  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:56.160491  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:56.160560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:56.186032  299667 cri.go:89] found id: ""
	I1205 07:46:56.186055  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.186063  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:56.186069  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:56.186127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:56.210655  299667 cri.go:89] found id: ""
	I1205 07:46:56.210683  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.210691  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:56.210698  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:56.210760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:56.236968  299667 cri.go:89] found id: ""
	I1205 07:46:56.237039  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.237060  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:56.237078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:56.237197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:56.261470  299667 cri.go:89] found id: ""
	I1205 07:46:56.261543  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.261559  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:56.261567  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:56.261626  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:56.287544  299667 cri.go:89] found id: ""
	I1205 07:46:56.287569  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.287578  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:56.287586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:56.287664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:56.313083  299667 cri.go:89] found id: ""
	I1205 07:46:56.313154  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.313200  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:56.313222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:56.313290  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:56.338841  299667 cri.go:89] found id: ""
	I1205 07:46:56.338865  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.338879  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:56.338886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:56.338971  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:56.364821  299667 cri.go:89] found id: ""
	I1205 07:46:56.364883  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.364906  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:56.364927  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:56.364953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:56.421380  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:56.421412  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:56.434797  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:56.434825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:56.500557  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:56.500579  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:56.500592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:56.525423  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:56.525453  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.059925  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:59.070350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:59.070417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:59.106211  299667 cri.go:89] found id: ""
	I1205 07:46:59.106234  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.106242  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:59.106250  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:59.106308  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:59.134075  299667 cri.go:89] found id: ""
	I1205 07:46:59.134101  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.134110  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:59.134116  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:59.134173  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:59.163091  299667 cri.go:89] found id: ""
	I1205 07:46:59.163119  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.163128  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:59.163134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:59.163195  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:59.189283  299667 cri.go:89] found id: ""
	I1205 07:46:59.189308  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.189316  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:59.189323  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:59.189384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:59.214391  299667 cri.go:89] found id: ""
	I1205 07:46:59.214416  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.214433  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:59.214439  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:59.214498  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:59.246223  299667 cri.go:89] found id: ""
	I1205 07:46:59.246246  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.246255  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:59.246262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:59.246321  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:59.274955  299667 cri.go:89] found id: ""
	I1205 07:46:59.274991  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.274999  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:59.275006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:59.275074  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:59.302932  299667 cri.go:89] found id: ""
	I1205 07:46:59.302956  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.302965  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:59.302984  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:59.302997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:59.362548  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:59.362571  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:59.362583  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:59.387053  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:59.387085  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.413739  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:59.413767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:59.469532  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:59.469569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:00.602658  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:03.102385  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:01.983455  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.994190  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:01.994316  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:02.023883  299667 cri.go:89] found id: ""
	I1205 07:47:02.023913  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.023922  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:02.023929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:02.023992  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:02.050293  299667 cri.go:89] found id: ""
	I1205 07:47:02.050367  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.050383  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:02.050390  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:02.050458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:02.076131  299667 cri.go:89] found id: ""
	I1205 07:47:02.076157  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.076166  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:02.076172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:02.076235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:02.115590  299667 cri.go:89] found id: ""
	I1205 07:47:02.115623  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.115632  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:02.115638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:02.115733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:02.155255  299667 cri.go:89] found id: ""
	I1205 07:47:02.155281  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.155290  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:02.155297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:02.155355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:02.184142  299667 cri.go:89] found id: ""
	I1205 07:47:02.184169  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.184178  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:02.184185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:02.184244  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:02.208969  299667 cri.go:89] found id: ""
	I1205 07:47:02.208997  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.209006  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:02.209036  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:02.209126  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:02.233523  299667 cri.go:89] found id: ""
	I1205 07:47:02.233556  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.233565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:02.233597  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:02.233609  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:02.289818  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:02.289852  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:02.303686  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:02.303756  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:02.370663  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:02.370711  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:02.370723  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:02.395466  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:02.395508  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:04.925546  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.937771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:04.937866  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:04.967009  299667 cri.go:89] found id: ""
	I1205 07:47:04.967031  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.967039  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:04.967046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:04.967103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:04.998327  299667 cri.go:89] found id: ""
	I1205 07:47:04.998351  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.998360  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:04.998365  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:04.998426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:05.026478  299667 cri.go:89] found id: ""
	I1205 07:47:05.026505  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.026513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:05.026521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:05.026583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:05.051556  299667 cri.go:89] found id: ""
	I1205 07:47:05.051580  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.051588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:05.051595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:05.051658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:05.078546  299667 cri.go:89] found id: ""
	I1205 07:47:05.078570  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.078579  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:05.078585  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:05.078649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1205 07:47:05.102744  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:07.602359  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:09.603452  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:05.107928  299667 cri.go:89] found id: ""
	I1205 07:47:05.107955  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.107964  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:05.107971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:05.108035  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:05.134695  299667 cri.go:89] found id: ""
	I1205 07:47:05.134718  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.134727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:05.134733  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:05.134792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:05.160991  299667 cri.go:89] found id: ""
	I1205 07:47:05.161017  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.161025  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:05.161035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:05.161048  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:05.211053  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:47:05.219354  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:05.219426  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:05.274067  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:05.274165  299667 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:05.274831  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.274851  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.336443  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.336473  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:05.336486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:05.361343  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.361374  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
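	The pass that ends here is minikube's control-plane discovery loop: for each expected component it runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found", which is why every probe in this log reports 0 containers while the apiserver is down. Below is a minimal, self-contained sketch of that loop for illustration only; minikube's cri.go actually runs these commands over SSH via its ssh_runner, and the structure and names here are assumptions, not its real code.

```go
// Hypothetical sketch of the component-discovery pass seen in the log above.
// Assumes crictl is on PATH and sudo is non-interactive; minikube itself runs
// these commands inside the node over SSH rather than locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// -a lists containers in all states; --quiet prints only container IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Mirrors the log's `No container was found matching "<name>"`.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}
```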
	I1205 07:47:07.887800  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.899185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:07.899259  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:07.927401  299667 cri.go:89] found id: ""
	I1205 07:47:07.927423  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.927431  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:07.927437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:07.927511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:07.958986  299667 cri.go:89] found id: ""
	I1205 07:47:07.959008  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.959017  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:07.959023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:07.959081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:07.986953  299667 cri.go:89] found id: ""
	I1205 07:47:07.986974  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.986983  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:07.986989  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:07.987052  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:08.013548  299667 cri.go:89] found id: ""
	I1205 07:47:08.013573  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.013581  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:08.013590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:08.013654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:08.039626  299667 cri.go:89] found id: ""
	I1205 07:47:08.039650  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.039658  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.039664  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:08.039724  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:08.064448  299667 cri.go:89] found id: ""
	I1205 07:47:08.064472  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.064482  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:08.064489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:08.064548  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:08.089144  299667 cri.go:89] found id: ""
	I1205 07:47:08.089234  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.089250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.089257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:08.089325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:08.124837  299667 cri.go:89] found id: ""
	I1205 07:47:08.124863  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.124890  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:08.124900  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.124917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.155028  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.155055  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.215310  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.215346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.229549  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.229577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.292266  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.292296  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:08.292309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:08.394608  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:47:08.457975  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:08.458074  299667 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
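	Both addon failures above ('default-storageclass' and 'dashboard') follow the same shape: `kubectl apply --force -f <manifest>` exits non-zero because manifest validation cannot download the OpenAPI schema from the unreachable apiserver on localhost:8443, minikube logs "apply failed, will retry", and the callback is re-queued. A hedged sketch of that apply-with-retry pattern follows; the helper name, attempt count, and backoff are assumptions for illustration, not minikube's actual addons.go.

```go
// Illustrative retry wrapper around `kubectl apply --force -f`, matching the
// failure/retry shape in the log. Requires Go 1.19+ for exec.Cmd.Environ.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry is a hypothetical helper: apply the manifest, and on a
// non-zero exit log the failure and retry with a simple linear backoff.
func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		if out, err := cmd.CombinedOutput(); err != nil {
			lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
			fmt.Printf("apply failed, will retry: %v\n", lastErr)
			time.Sleep(time.Duration(i+1) * 2 * time.Second)
			continue
		}
		return nil
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
```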
	W1205 07:47:12.102433  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:14.102787  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:10.816831  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:10.827471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:10.827537  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:10.856590  299667 cri.go:89] found id: ""
	I1205 07:47:10.856612  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.856621  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:10.856626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:10.856687  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:10.887186  299667 cri.go:89] found id: ""
	I1205 07:47:10.887207  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.887215  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:10.887221  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:10.887279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:10.914460  299667 cri.go:89] found id: ""
	I1205 07:47:10.914482  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.914490  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:10.914497  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:10.914554  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:10.943070  299667 cri.go:89] found id: ""
	I1205 07:47:10.943095  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.943103  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:10.943109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:10.943167  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:10.967007  299667 cri.go:89] found id: ""
	I1205 07:47:10.967034  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.967043  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:10.967050  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:10.967142  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:10.990367  299667 cri.go:89] found id: ""
	I1205 07:47:10.990394  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.990402  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:10.990408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:10.990465  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:11.021515  299667 cri.go:89] found id: ""
	I1205 07:47:11.021538  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.021547  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.021553  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:11.021616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:11.046137  299667 cri.go:89] found id: ""
	I1205 07:47:11.046159  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.046168  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:11.046176  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:11.046190  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:11.071756  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.071787  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:11.101757  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:11.101784  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:11.175924  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:11.175962  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:11.190392  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.190424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.252655  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:13.753819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:13.764287  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:13.764373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:13.790393  299667 cri.go:89] found id: ""
	I1205 07:47:13.790418  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.790426  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:13.790433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:13.790496  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:13.814911  299667 cri.go:89] found id: ""
	I1205 07:47:13.814935  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.814944  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:13.814951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:13.815007  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:13.839756  299667 cri.go:89] found id: ""
	I1205 07:47:13.839779  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.839787  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:13.839794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:13.839852  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:13.870908  299667 cri.go:89] found id: ""
	I1205 07:47:13.870933  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.870943  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:13.870949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:13.871010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:13.902182  299667 cri.go:89] found id: ""
	I1205 07:47:13.902208  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.902216  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:13.902223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:13.902281  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:13.928077  299667 cri.go:89] found id: ""
	I1205 07:47:13.928102  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.928111  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:13.928117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:13.928174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:13.952673  299667 cri.go:89] found id: ""
	I1205 07:47:13.952706  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.952715  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:13.952721  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:13.952786  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:13.982104  299667 cri.go:89] found id: ""
	I1205 07:47:13.982137  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.982147  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:13.982156  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:13.982168  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:14.047894  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:14.047925  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:14.061830  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:14.061861  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:14.145569  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:14.145587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:14.145601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:14.173369  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:14.173406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:16.701890  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:16.712471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:16.712541  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:16.737364  299667 cri.go:89] found id: ""
	I1205 07:47:16.737386  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.737394  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:16.737400  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:16.737458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:16.761826  299667 cri.go:89] found id: ""
	I1205 07:47:16.761849  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.761858  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:16.761864  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:16.761921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:16.787321  299667 cri.go:89] found id: ""
	I1205 07:47:16.787343  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.787352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:16.787359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:16.787419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:16.812059  299667 cri.go:89] found id: ""
	I1205 07:47:16.812080  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.812087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:16.812094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:16.812152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:16.835710  299667 cri.go:89] found id: ""
	I1205 07:47:16.835731  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.835739  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:16.835745  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:16.835804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:16.866817  299667 cri.go:89] found id: ""
	I1205 07:47:16.866839  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.866848  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:16.866854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:16.866915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:16.892855  299667 cri.go:89] found id: ""
	I1205 07:47:16.892877  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.892885  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:16.892891  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:16.892948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:16.921328  299667 cri.go:89] found id: ""
	I1205 07:47:16.921348  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.921356  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:16.921365  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.921378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.975810  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.975843  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:16.989559  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.989589  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:17.052011  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:17.052031  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:17.052044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:17.076823  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:17.076853  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:18.609402  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:47:18.686960  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:18.687059  299667 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:18.690290  299667 out.go:179] * Enabled addons: 
	W1205 07:47:16.602616  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:19.102330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:18.693172  299667 addons.go:530] duration metric: took 1m46.271465904s for enable addons: enabled=[]
	I1205 07:47:19.612423  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:19.623124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:19.623194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:19.651237  299667 cri.go:89] found id: ""
	I1205 07:47:19.651260  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.651268  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:19.651276  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:19.651338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:19.679760  299667 cri.go:89] found id: ""
	I1205 07:47:19.679781  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.679790  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:19.679795  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:19.679854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:19.703620  299667 cri.go:89] found id: ""
	I1205 07:47:19.703640  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.703652  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:19.703658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:19.703731  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:19.727543  299667 cri.go:89] found id: ""
	I1205 07:47:19.727607  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.727629  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:19.727645  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:19.727736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:19.751580  299667 cri.go:89] found id: ""
	I1205 07:47:19.751606  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.751614  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:19.751620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:19.751678  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:19.778033  299667 cri.go:89] found id: ""
	I1205 07:47:19.778058  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.778066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:19.778074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:19.778130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:19.805321  299667 cri.go:89] found id: ""
	I1205 07:47:19.805346  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.805354  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:19.805360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:19.805419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:19.828911  299667 cri.go:89] found id: ""
	I1205 07:47:19.828932  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.828940  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:19.828949  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:19.828961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:19.842046  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:19.842072  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:19.924477  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:19.924542  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:19.924568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:19.949241  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:19.949279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:19.977260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:19.977287  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:47:21.102389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:23.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
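	The interleaved 297527 lines are the no-preload test's readiness poll: node_ready.go GETs /api/v1/nodes/no-preload-241270 and keeps retrying while the dial to 192.168.76.2:8443 is refused. A rough sketch of such a poll is below, using the endpoint from the log; everything else is an assumption. TLS verification and client credentials are deliberately elided (a real client presents the CA and auth from the kubeconfig, and an anonymous GET would be rejected rather than return 200), and a real check would also parse status.conditions for Ready=True instead of stopping at the HTTP status.

```go
// Hypothetical readiness poll mirroring the node_ready.go retries in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitNodeReady(base, node string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only to keep the sketch self-contained; real
		// clients load the cluster CA from the kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/api/v1/nodes/" + node)
		if err != nil {
			// Matches the log: connection refused while the apiserver is down.
			fmt.Printf("error getting node %q (will retry): %v\n", node, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", node, timeout)
}

func main() {
	if err := waitNodeReady("https://192.168.76.2:8443", "no-preload-241270", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```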
	I1205 07:47:22.534572  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:22.545193  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:22.545272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:22.570057  299667 cri.go:89] found id: ""
	I1205 07:47:22.570083  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.570092  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:22.570098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:22.570163  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:22.595296  299667 cri.go:89] found id: ""
	I1205 07:47:22.595321  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.595330  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:22.595337  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:22.595421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:22.620283  299667 cri.go:89] found id: ""
	I1205 07:47:22.620307  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.620315  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:22.620322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:22.620399  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:22.644353  299667 cri.go:89] found id: ""
	I1205 07:47:22.644379  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.644389  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:22.644395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:22.644474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:22.674856  299667 cri.go:89] found id: ""
	I1205 07:47:22.674885  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.674894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:22.674900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:22.674980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:22.699975  299667 cri.go:89] found id: ""
	I1205 07:47:22.700002  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.700011  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:22.700018  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:22.700089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:22.725706  299667 cri.go:89] found id: ""
	I1205 07:47:22.725734  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.725743  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:22.725753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:22.725822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:22.750409  299667 cri.go:89] found id: ""
	I1205 07:47:22.750430  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.750439  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:22.750459  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:22.750471  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:22.775719  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:22.775754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:22.806148  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:22.806175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.863750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:22.863786  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:22.878145  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:22.878174  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:22.945284  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:47:25.602789  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:28.102396  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:25.446099  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:25.457267  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:25.457345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:25.484246  299667 cri.go:89] found id: ""
	I1205 07:47:25.484273  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.484282  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:25.484289  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:25.484346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:25.513783  299667 cri.go:89] found id: ""
	I1205 07:47:25.513806  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.513815  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:25.513821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:25.513895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:25.542603  299667 cri.go:89] found id: ""
	I1205 07:47:25.542627  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.542636  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:25.542642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:25.542768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:25.566393  299667 cri.go:89] found id: ""
	I1205 07:47:25.566417  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.566427  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:25.566433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:25.566510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:25.591113  299667 cri.go:89] found id: ""
	I1205 07:47:25.591148  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.591157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:25.591164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:25.591237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:25.619895  299667 cri.go:89] found id: ""
	I1205 07:47:25.619919  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.619928  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:25.619935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:25.619991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:25.645287  299667 cri.go:89] found id: ""
	I1205 07:47:25.645311  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.645319  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:25.645326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:25.645386  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:25.670944  299667 cri.go:89] found id: ""
	I1205 07:47:25.670967  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.670975  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:25.671025  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:25.671043  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:25.728687  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:25.728721  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:25.743347  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:25.743373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:25.808046  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:25.808069  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:25.808082  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:25.833265  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:25.833298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:28.366360  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:28.378460  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:28.378539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:28.413651  299667 cri.go:89] found id: ""
	I1205 07:47:28.413678  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.413687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:28.413694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:28.413755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:28.439196  299667 cri.go:89] found id: ""
	I1205 07:47:28.439223  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.439232  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:28.439238  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:28.439323  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:28.463516  299667 cri.go:89] found id: ""
	I1205 07:47:28.463587  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.463610  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:28.463628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:28.463709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:28.489425  299667 cri.go:89] found id: ""
	I1205 07:47:28.489450  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.489459  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:28.489467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:28.489560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:28.516772  299667 cri.go:89] found id: ""
	I1205 07:47:28.516797  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.516806  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:28.516812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:28.516872  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:28.543466  299667 cri.go:89] found id: ""
	I1205 07:47:28.543490  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.543498  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:28.543507  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:28.543564  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:28.568431  299667 cri.go:89] found id: ""
	I1205 07:47:28.568455  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.568463  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:28.568469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:28.568528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:28.593549  299667 cri.go:89] found id: ""
	I1205 07:47:28.593573  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.593581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:28.593590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:28.593601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:28.652330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:28.652364  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:28.665857  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:28.665882  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:28.733864  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:28.733886  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:28.733898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:28.758935  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:28.758971  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:30.102577  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:32.602389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:34.602704  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:31.286625  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:31.297007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:31.297075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:31.324486  299667 cri.go:89] found id: ""
	I1205 07:47:31.324508  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.324517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:31.324523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:31.324585  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:31.367211  299667 cri.go:89] found id: ""
	I1205 07:47:31.367234  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.367242  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:31.367249  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:31.367336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:31.398063  299667 cri.go:89] found id: ""
	I1205 07:47:31.398124  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.398148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:31.398166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:31.398239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:31.430255  299667 cri.go:89] found id: ""
	I1205 07:47:31.430280  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.430288  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:31.430303  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:31.430362  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:31.455188  299667 cri.go:89] found id: ""
	I1205 07:47:31.455213  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.455222  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:31.455228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:31.455304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:31.483709  299667 cri.go:89] found id: ""
	I1205 07:47:31.483734  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.483743  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:31.483754  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:31.483841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:31.511054  299667 cri.go:89] found id: ""
	I1205 07:47:31.511081  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.511090  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:31.511096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:31.511154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:31.536168  299667 cri.go:89] found id: ""
	I1205 07:47:31.536193  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.536202  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:31.536211  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:31.536222  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:31.592031  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:31.592066  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:31.606480  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:31.606506  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:31.673271  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:31.673294  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:31.673309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:31.699030  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:31.699063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:34.230473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:34.241086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:34.241182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:34.266354  299667 cri.go:89] found id: ""
	I1205 07:47:34.266377  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.266386  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:34.266393  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:34.266455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:34.295281  299667 cri.go:89] found id: ""
	I1205 07:47:34.295304  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.295313  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:34.295322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:34.295381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:34.320096  299667 cri.go:89] found id: ""
	I1205 07:47:34.320119  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.320127  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:34.320134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:34.320193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:34.351699  299667 cri.go:89] found id: ""
	I1205 07:47:34.351769  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.351778  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:34.351785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:34.351890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:34.384621  299667 cri.go:89] found id: ""
	I1205 07:47:34.384643  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.384651  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:34.384658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:34.384716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:34.416183  299667 cri.go:89] found id: ""
	I1205 07:47:34.416209  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.416217  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:34.416225  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:34.416303  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:34.442818  299667 cri.go:89] found id: ""
	I1205 07:47:34.442843  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.442852  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:34.442859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:34.442926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:34.467574  299667 cri.go:89] found id: ""
	I1205 07:47:34.467600  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.467608  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:34.467618  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:34.467630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:34.525566  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:34.525599  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:34.538971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:34.539003  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:34.603104  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:34.603123  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:34.603135  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:34.627990  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:34.628024  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:37.102277  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:39.102399  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:37.156741  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:37.168917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:37.168986  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:37.194896  299667 cri.go:89] found id: ""
	I1205 07:47:37.194920  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.194929  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:37.194935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:37.194996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:37.220279  299667 cri.go:89] found id: ""
	I1205 07:47:37.220316  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.220324  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:37.220331  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:37.220402  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:37.244728  299667 cri.go:89] found id: ""
	I1205 07:47:37.244759  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.244768  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:37.244774  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:37.244838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:37.269770  299667 cri.go:89] found id: ""
	I1205 07:47:37.269794  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.269802  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:37.269809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:37.269865  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:37.296343  299667 cri.go:89] found id: ""
	I1205 07:47:37.296367  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.296376  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:37.296382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:37.296444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:37.321553  299667 cri.go:89] found id: ""
	I1205 07:47:37.321576  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.321585  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:37.321592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:37.321651  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:37.356802  299667 cri.go:89] found id: ""
	I1205 07:47:37.356824  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.356834  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:37.356841  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:37.356901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:37.384475  299667 cri.go:89] found id: ""
	I1205 07:47:37.384497  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.384505  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:37.384513  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:37.384524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:37.451184  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:37.451220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:37.465508  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:37.465535  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:37.531461  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:37.531483  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:37.531495  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:37.556492  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:37.556531  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.084953  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:47:41.103193  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:43.602434  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:40.099166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:40.099240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:40.129037  299667 cri.go:89] found id: ""
	I1205 07:47:40.129058  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.129066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:40.129074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:40.129147  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:40.166711  299667 cri.go:89] found id: ""
	I1205 07:47:40.166735  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.166743  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:40.166752  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:40.166813  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:40.192959  299667 cri.go:89] found id: ""
	I1205 07:47:40.192982  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.192991  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:40.192998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:40.193056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:40.218168  299667 cri.go:89] found id: ""
	I1205 07:47:40.218193  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.218202  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:40.218208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:40.218292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:40.243397  299667 cri.go:89] found id: ""
	I1205 07:47:40.243420  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.243428  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:40.243435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:40.243510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:40.268685  299667 cri.go:89] found id: ""
	I1205 07:47:40.268710  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.268718  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:40.268725  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:40.268802  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:40.294417  299667 cri.go:89] found id: ""
	I1205 07:47:40.294443  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.294452  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:40.294480  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:40.294561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:40.321495  299667 cri.go:89] found id: ""
	I1205 07:47:40.321556  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.321570  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:40.321580  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:40.321592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.360106  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:40.360133  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:40.420594  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:40.420627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:40.437302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:40.437332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:40.503821  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:40.503843  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:40.503855  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.028974  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:43.039847  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:43.039922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:43.066179  299667 cri.go:89] found id: ""
	I1205 07:47:43.066202  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.066210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:43.066216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:43.066274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:43.092504  299667 cri.go:89] found id: ""
	I1205 07:47:43.092528  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.092536  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:43.092543  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:43.092610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:43.124060  299667 cri.go:89] found id: ""
	I1205 07:47:43.124086  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.124095  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:43.124102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:43.124166  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:43.154063  299667 cri.go:89] found id: ""
	I1205 07:47:43.154089  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.154098  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:43.154104  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:43.154174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:43.185231  299667 cri.go:89] found id: ""
	I1205 07:47:43.185255  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.185264  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:43.185271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:43.185334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:43.214039  299667 cri.go:89] found id: ""
	I1205 07:47:43.214113  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.214135  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:43.214153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:43.214239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:43.239645  299667 cri.go:89] found id: ""
	I1205 07:47:43.239709  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.239730  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:43.239747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:43.239836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:43.264373  299667 cri.go:89] found id: ""
	I1205 07:47:43.264437  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.264458  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:43.264478  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:43.264514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:43.320427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:43.320464  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:43.334556  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:43.334586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:43.419578  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
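	The describe-nodes step fails because nothing is listening on the API server port: every request to localhost:8443 is refused before TLS even starts. A quick way to confirm from inside the node that this is a dead apiserver rather than a kubeconfig problem (a sketch, assuming the usual tools are present in the node image):

	    sudo crictl ps -a --name kube-apiserver   # any apiserver container at all?
	    sudo ss -ltnp | grep 8443                 # anything bound to port 8443?

	With no kube-apiserver container and nothing bound to 8443, the connection-refused errors that repeat below are expected until the control plane comes up.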
	I1205 07:47:43.419600  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:43.419613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.444937  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:43.444974  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:45.602606  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:48.102422  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
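	Interleaved with the cycle above, a second test process (PID 297527) is polling the Ready condition of node "no-preload-241270" and hitting the same refused port at 192.168.76.2:8443. The equivalent manual check from the host, a sketch (minikube names the kubeconfig context after the profile, here no-preload-241270):

	    kubectl --context no-preload-241270 get node no-preload-241270 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	A connection-refused error here, as in the log, means the poll keeps retrying until its timeout rather than reporting NotReady.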
	I1205 07:47:45.973125  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:45.983741  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:45.983836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:46.021150  299667 cri.go:89] found id: ""
	I1205 07:47:46.021200  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.021208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:46.021215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:46.021296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:46.046658  299667 cri.go:89] found id: ""
	I1205 07:47:46.046688  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.046725  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:46.046732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:46.046806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:46.072039  299667 cri.go:89] found id: ""
	I1205 07:47:46.072113  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.072136  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:46.072153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:46.072239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:46.117323  299667 cri.go:89] found id: ""
	I1205 07:47:46.117399  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.117423  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:46.117448  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:46.117538  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:46.154886  299667 cri.go:89] found id: ""
	I1205 07:47:46.154912  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.154921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:46.154928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:46.155012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:46.181153  299667 cri.go:89] found id: ""
	I1205 07:47:46.181199  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.181208  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:46.181215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:46.181302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:46.211244  299667 cri.go:89] found id: ""
	I1205 07:47:46.211270  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.211279  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:46.211285  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:46.211346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:46.235089  299667 cri.go:89] found id: ""
	I1205 07:47:46.235164  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.235180  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:46.235191  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:46.235203  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:46.305530  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:46.305551  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:46.305563  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:46.330757  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:46.330792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:46.376750  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:46.376781  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:46.439507  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:46.439542  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:48.953904  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:48.964561  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:48.964628  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:48.987874  299667 cri.go:89] found id: ""
	I1205 07:47:48.987900  299667 logs.go:282] 0 containers: []
	W1205 07:47:48.987909  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:48.987916  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:48.987974  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:49.014890  299667 cri.go:89] found id: ""
	I1205 07:47:49.014966  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.014980  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:49.014988  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:49.015065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:49.040290  299667 cri.go:89] found id: ""
	I1205 07:47:49.040313  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.040321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:49.040328  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:49.040385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:49.065216  299667 cri.go:89] found id: ""
	I1205 07:47:49.065278  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.065287  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:49.065293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:49.065350  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:49.091916  299667 cri.go:89] found id: ""
	I1205 07:47:49.091941  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.091950  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:49.091956  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:49.092015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:49.122078  299667 cri.go:89] found id: ""
	I1205 07:47:49.122101  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.122110  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:49.122117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:49.122174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:49.148378  299667 cri.go:89] found id: ""
	I1205 07:47:49.148400  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.148409  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:49.148415  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:49.148474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:49.181597  299667 cri.go:89] found id: ""
	I1205 07:47:49.181623  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.181639  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:49.181649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:49.181660  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:49.237429  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:49.237462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:49.252514  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:49.252540  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:49.317886  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:49.317908  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:49.317922  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:49.343471  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:49.343503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:50.103132  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:52.602329  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:51.885282  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:51.895713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:51.895806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:51.923558  299667 cri.go:89] found id: ""
	I1205 07:47:51.923582  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.923592  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:51.923599  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:51.923702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:51.952466  299667 cri.go:89] found id: ""
	I1205 07:47:51.952490  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.952499  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:51.952506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:51.952594  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:51.977008  299667 cri.go:89] found id: ""
	I1205 07:47:51.977032  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.977041  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:51.977048  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:51.977130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:52.001855  299667 cri.go:89] found id: ""
	I1205 07:47:52.001880  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.001890  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:52.001918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:52.002010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:52.041299  299667 cri.go:89] found id: ""
	I1205 07:47:52.041367  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.041391  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:52.041410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:52.041490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:52.066425  299667 cri.go:89] found id: ""
	I1205 07:47:52.066448  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.066457  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:52.066484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:52.066567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:52.093389  299667 cri.go:89] found id: ""
	I1205 07:47:52.093415  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.093425  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:52.093431  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:52.093490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:52.131379  299667 cri.go:89] found id: ""
	I1205 07:47:52.131404  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.131412  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:52.131421  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:52.131432  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:52.172215  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:52.172246  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:52.232285  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:52.232317  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:52.246383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:52.246461  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:52.312938  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:52.312999  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:52.313037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:54.839218  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:54.849526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:54.849596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:54.878984  299667 cri.go:89] found id: ""
	I1205 07:47:54.879018  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.879028  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:54.879034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:54.879115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:54.903570  299667 cri.go:89] found id: ""
	I1205 07:47:54.903593  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.903603  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:54.903609  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:54.903668  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:54.928679  299667 cri.go:89] found id: ""
	I1205 07:47:54.928701  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.928710  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:54.928716  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:54.928772  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:54.957443  299667 cri.go:89] found id: ""
	I1205 07:47:54.957465  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.957474  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:54.957481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:54.957539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:54.981997  299667 cri.go:89] found id: ""
	I1205 07:47:54.982022  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.982031  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:54.982037  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:54.982097  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:55.019658  299667 cri.go:89] found id: ""
	I1205 07:47:55.019684  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.019694  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:55.019702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:55.019774  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:55.045945  299667 cri.go:89] found id: ""
	I1205 07:47:55.045968  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.045977  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:55.045982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:55.046047  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:55.070660  299667 cri.go:89] found id: ""
	I1205 07:47:55.070682  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.070691  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:55.070753  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:55.070772  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:55.103139  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:57.602889  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:55.155877  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:55.155904  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:55.155918  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:55.182506  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:55.182538  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:55.209519  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:55.209545  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:55.268283  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:55.268315  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:57.781956  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:57.792419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:57.792511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:57.816805  299667 cri.go:89] found id: ""
	I1205 07:47:57.816830  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.816839  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:57.816845  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:57.816907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:57.844943  299667 cri.go:89] found id: ""
	I1205 07:47:57.844967  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.844975  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:57.844982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:57.845041  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:57.869698  299667 cri.go:89] found id: ""
	I1205 07:47:57.869720  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.869728  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:57.869735  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:57.869792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:57.894855  299667 cri.go:89] found id: ""
	I1205 07:47:57.894881  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.894889  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:57.894896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:57.895015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:57.919181  299667 cri.go:89] found id: ""
	I1205 07:47:57.919207  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.919217  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:57.919223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:57.919284  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:57.947523  299667 cri.go:89] found id: ""
	I1205 07:47:57.947545  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.947553  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:57.947559  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:57.947617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:57.972190  299667 cri.go:89] found id: ""
	I1205 07:47:57.972212  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.972221  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:57.972227  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:57.972337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:57.995598  299667 cri.go:89] found id: ""
	I1205 07:47:57.995620  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.995628  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:57.995637  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:57.995648  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:58.053180  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:58.053214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:58.066958  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:58.067035  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:58.148853  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:58.148871  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:58.148884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:58.177078  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:58.177111  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:00.102486  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:02.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:04.602418  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:00.709764  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:00.720636  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:00.720709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:00.745332  299667 cri.go:89] found id: ""
	I1205 07:48:00.745357  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.745367  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:00.745377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:00.745446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:00.769743  299667 cri.go:89] found id: ""
	I1205 07:48:00.769766  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.769774  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:00.769780  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:00.769838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:00.793723  299667 cri.go:89] found id: ""
	I1205 07:48:00.793747  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.793755  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:00.793761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:00.793849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:00.822270  299667 cri.go:89] found id: ""
	I1205 07:48:00.822295  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.822304  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:00.822311  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:00.822372  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:00.846055  299667 cri.go:89] found id: ""
	I1205 07:48:00.846079  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.846088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:00.846094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:00.846154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:00.875896  299667 cri.go:89] found id: ""
	I1205 07:48:00.875927  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.875938  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:00.875945  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:00.876005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:00.901376  299667 cri.go:89] found id: ""
	I1205 07:48:00.901401  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.901410  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:00.901417  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:00.901478  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:00.931038  299667 cri.go:89] found id: ""
	I1205 07:48:00.931062  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.931070  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:00.931080  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:00.931121  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:00.997183  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:00.997205  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:00.997217  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:01.023514  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:01.023552  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:01.051665  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:01.051694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:01.112451  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:01.112528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:03.628641  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:03.640043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:03.640115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:03.668895  299667 cri.go:89] found id: ""
	I1205 07:48:03.668923  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.668932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:03.668939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:03.669005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:03.698851  299667 cri.go:89] found id: ""
	I1205 07:48:03.698873  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.698882  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:03.698888  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:03.698946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:03.724736  299667 cri.go:89] found id: ""
	I1205 07:48:03.724758  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.724767  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:03.724773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:03.724831  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:03.751007  299667 cri.go:89] found id: ""
	I1205 07:48:03.751030  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.751038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:03.751072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:03.751143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:03.779130  299667 cri.go:89] found id: ""
	I1205 07:48:03.779153  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.779162  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:03.779168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:03.779226  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:03.808717  299667 cri.go:89] found id: ""
	I1205 07:48:03.808738  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.808798  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:03.808812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:03.808893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:03.834648  299667 cri.go:89] found id: ""
	I1205 07:48:03.834745  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.834769  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:03.834790  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:03.834894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:03.860266  299667 cri.go:89] found id: ""
	I1205 07:48:03.860290  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.860298  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:03.860307  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:03.860326  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:03.925650  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:03.925672  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:03.925684  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:03.951836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:03.951866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:03.981147  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:03.981199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:04.037271  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:04.037308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
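The "failed describe nodes" blocks above are all the same symptom: kubectl is pointed at localhost:8443 and nothing is listening there, so every request in its API group discovery phase fails with connection refused before any real query is attempted. A minimal sketch of that reachability check, assuming only the endpoint taken from the log (the 2-second timeout is illustrative, not minikube's value):

// probe_apiserver.go - checks whether anything accepts TCP connections
// on the apiserver port. Matches the failure mode in the log:
// "dial tcp [::1]:8443: connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Until something accepts connections on 8443, every kubectl invocation in these sweeps will keep printing the same five memcache.go discovery errors and exiting with status 1.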
	W1205 07:48:07.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:09.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:06.551820  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:06.562850  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:06.562922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:06.588022  299667 cri.go:89] found id: ""
	I1205 07:48:06.588044  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.588052  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:06.588059  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:06.588121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:06.618654  299667 cri.go:89] found id: ""
	I1205 07:48:06.618677  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.618687  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:06.618693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:06.618760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:06.654167  299667 cri.go:89] found id: ""
	I1205 07:48:06.654188  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.654197  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:06.654203  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:06.654261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:06.681234  299667 cri.go:89] found id: ""
	I1205 07:48:06.681306  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.681327  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:06.681345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:06.681437  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:06.705922  299667 cri.go:89] found id: ""
	I1205 07:48:06.705946  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.705955  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:06.705962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:06.706044  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:06.730881  299667 cri.go:89] found id: ""
	I1205 07:48:06.730913  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.730924  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:06.730930  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:06.730987  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:06.755636  299667 cri.go:89] found id: ""
	I1205 07:48:06.755661  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.755670  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:06.755676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:06.755743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:06.780702  299667 cri.go:89] found id: ""
	I1205 07:48:06.780735  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.780743  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:06.780753  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:06.780764  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:06.841265  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:06.841303  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.854661  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:06.854686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:06.918298  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:06.918316  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:06.918328  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:06.943239  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:06.943274  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
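Each diagnostic cycle starts by checking for an apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) and then asks the CRI for each expected control-plane container by name; an empty result from crictl is what produces the found id: "" lines and the "No container was found matching" warnings. A minimal sketch of that probe, assuming crictl is on the PATH and sudo needs no password (the command and component list are copied from the log, running locally rather than over SSH):

// crictl_probe.go - per-component container presence check, as seen at
// the top of every cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if strings.TrimSpace(string(out)) == "" {
			// Empty output means no container ID matched the name.
			fmt.Printf("no container was found matching %q\n", name)
		}
	}
}

In the log every one of these probes comes back empty, which is why the subsequent sweep falls back to kubelet, dmesg, containerd, and container-status logs instead of container logs.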
	I1205 07:48:09.471658  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:09.482526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:09.482598  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:09.507658  299667 cri.go:89] found id: ""
	I1205 07:48:09.507683  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.507692  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:09.507699  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:09.507765  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:09.538688  299667 cri.go:89] found id: ""
	I1205 07:48:09.538744  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.538758  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:09.538765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:09.538835  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:09.564016  299667 cri.go:89] found id: ""
	I1205 07:48:09.564041  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.564050  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:09.564056  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:09.564118  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:09.595020  299667 cri.go:89] found id: ""
	I1205 07:48:09.595047  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.595056  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:09.595062  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:09.595170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:09.627725  299667 cri.go:89] found id: ""
	I1205 07:48:09.627747  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.627756  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:09.627763  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:09.627821  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:09.661208  299667 cri.go:89] found id: ""
	I1205 07:48:09.661273  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.661290  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:09.661297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:09.661371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:09.686173  299667 cri.go:89] found id: ""
	I1205 07:48:09.686207  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.686216  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:09.686223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:09.686291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:09.710385  299667 cri.go:89] found id: ""
	I1205 07:48:09.710417  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.710426  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:09.710435  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:09.710447  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:09.724065  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:09.724089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:09.786352  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:09.786371  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:09.786383  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:09.814782  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:09.814823  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.845678  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:09.845705  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:11.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:14.102692  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:12.403586  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:12.414137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:12.414208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:12.443644  299667 cri.go:89] found id: ""
	I1205 07:48:12.443666  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.443677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:12.443683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:12.443743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:12.468970  299667 cri.go:89] found id: ""
	I1205 07:48:12.468992  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.469001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:12.469007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:12.469073  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:12.495420  299667 cri.go:89] found id: ""
	I1205 07:48:12.495441  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.495449  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:12.495455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:12.495513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:12.520821  299667 cri.go:89] found id: ""
	I1205 07:48:12.520848  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.520857  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:12.520862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:12.520920  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:12.546738  299667 cri.go:89] found id: ""
	I1205 07:48:12.546767  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.546776  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:12.546782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:12.546845  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:12.571663  299667 cri.go:89] found id: ""
	I1205 07:48:12.571687  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.571696  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:12.571702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:12.571759  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:12.600237  299667 cri.go:89] found id: ""
	I1205 07:48:12.600263  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.600272  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:12.600279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:12.600336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:12.645073  299667 cri.go:89] found id: ""
	I1205 07:48:12.645108  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.645116  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:12.645126  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:12.645137  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:12.661987  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:12.662020  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:12.726418  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:12.726442  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:12.726455  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:12.751208  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:12.751243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:12.780690  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:12.780718  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:16.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:18.602693  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
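Interleaved with the diagnostic sweeps, a second process (PID 297527, the no-preload test) keeps polling the node's Ready condition against 192.168.76.2:8443 and hits the same connection-refused error. A rough sketch of such a poll loop follows; the URL is copied from the log, while the bare HTTP client, the skipped TLS verification, and the 2.5-second interval are illustrative assumptions (the real test uses an authenticated Kubernetes client):

// node_ready_poll.go - retry loop for the node Ready check behind the
// node_ready.go warnings interleaved above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Same symptom as the log: connect: connection refused.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with status", resp.Status)
		return
	}
}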
	I1205 07:48:15.336959  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:15.349150  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:15.349233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:15.379055  299667 cri.go:89] found id: ""
	I1205 07:48:15.379075  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.379084  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:15.379090  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:15.379148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:15.411812  299667 cri.go:89] found id: ""
	I1205 07:48:15.411832  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.411841  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:15.411849  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:15.411907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:15.436056  299667 cri.go:89] found id: ""
	I1205 07:48:15.436077  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.436085  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:15.436091  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:15.436152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:15.461323  299667 cri.go:89] found id: ""
	I1205 07:48:15.461345  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.461354  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:15.461360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:15.461416  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:15.490552  299667 cri.go:89] found id: ""
	I1205 07:48:15.490577  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.490586  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:15.490593  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:15.490682  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:15.519448  299667 cri.go:89] found id: ""
	I1205 07:48:15.519471  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.519480  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:15.519487  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:15.519544  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:15.548923  299667 cri.go:89] found id: ""
	I1205 07:48:15.548947  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.548956  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:15.548962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:15.549024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:15.574804  299667 cri.go:89] found id: ""
	I1205 07:48:15.574828  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.574839  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:15.574847  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:15.574878  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.634392  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:15.634428  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:15.651971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:15.651998  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:15.719384  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:15.719407  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:15.719418  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:15.743909  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:15.743941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.273819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:18.284902  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:18.284975  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:18.310770  299667 cri.go:89] found id: ""
	I1205 07:48:18.310793  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.310802  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:18.310809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:18.310868  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:18.335509  299667 cri.go:89] found id: ""
	I1205 07:48:18.335530  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.335538  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:18.335544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:18.335602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:18.367849  299667 cri.go:89] found id: ""
	I1205 07:48:18.367875  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.367884  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:18.367890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:18.367947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:18.397008  299667 cri.go:89] found id: ""
	I1205 07:48:18.397037  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.397046  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:18.397053  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:18.397115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:18.422994  299667 cri.go:89] found id: ""
	I1205 07:48:18.423017  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.423035  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:18.423043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:18.423109  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:18.447590  299667 cri.go:89] found id: ""
	I1205 07:48:18.447666  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.447689  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:18.447713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:18.447801  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:18.472279  299667 cri.go:89] found id: ""
	I1205 07:48:18.472353  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.472375  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:18.472392  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:18.472477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:18.497432  299667 cri.go:89] found id: ""
	I1205 07:48:18.497454  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.497463  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:18.497471  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:18.497484  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:18.522163  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:18.522196  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.550354  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:18.550378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:18.605871  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:18.605944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:18.623406  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:18.623435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:18.692830  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:48:20.603254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:23.103214  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
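The timestamps give the cadence: the 299667 sweeps land at 07:48:03, :06, :09, :12, :15, :18, :21, :24 and :27, i.e. roughly every 3 seconds, while the 297527 node polls arrive about every 2.5 seconds, which is why the two streams drift in and out of phase in the log. A sketch of two fixed-interval retry loops with those periods (the intervals are read off the log timestamps, not taken from minikube's source):

// retry_cadence.go - two independent tickers approximating the observed
// retry rhythm of the diagnostic sweeps and the node polls.
package main

import (
	"fmt"
	"time"
)

func main() {
	sweep := time.NewTicker(3 * time.Second)          // ~PID 299667
	poll := time.NewTicker(2500 * time.Millisecond)   // ~PID 297527
	defer sweep.Stop()
	defer poll.Stop()
	deadline := time.After(10 * time.Second)
	for {
		select {
		case t := <-sweep.C:
			fmt.Println(t.Format("15:04:05"), "gather diagnostics sweep")
		case t := <-poll.C:
			fmt.Println(t.Format("15:04:05"), "poll node Ready condition")
		case <-deadline:
			return
		}
	}
}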
	I1205 07:48:21.193117  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:21.203367  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:21.203430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:21.228233  299667 cri.go:89] found id: ""
	I1205 07:48:21.228257  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.228265  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:21.228272  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:21.228331  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:21.256427  299667 cri.go:89] found id: ""
	I1205 07:48:21.256448  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.256456  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:21.256462  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:21.256523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:21.281113  299667 cri.go:89] found id: ""
	I1205 07:48:21.281136  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.281145  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:21.281151  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:21.281238  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:21.305777  299667 cri.go:89] found id: ""
	I1205 07:48:21.305798  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.305806  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:21.305812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:21.305869  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:21.335558  299667 cri.go:89] found id: ""
	I1205 07:48:21.335622  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.335645  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:21.335662  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:21.335745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:21.374161  299667 cri.go:89] found id: ""
	I1205 07:48:21.374230  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.374257  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:21.374275  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:21.374358  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:21.403378  299667 cri.go:89] found id: ""
	I1205 07:48:21.403442  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.403464  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:21.403481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:21.403561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:21.428681  299667 cri.go:89] found id: ""
	I1205 07:48:21.428707  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.428717  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:21.428725  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:21.428736  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:21.485472  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:21.485503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:21.499440  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:21.499521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:21.564057  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:21.564088  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:21.564102  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:21.588591  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:21.588627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.133263  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:24.145210  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:24.145292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:24.172487  299667 cri.go:89] found id: ""
	I1205 07:48:24.172509  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.172517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:24.172523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:24.172582  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:24.197589  299667 cri.go:89] found id: ""
	I1205 07:48:24.197612  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.197634  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:24.197641  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:24.197727  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:24.232698  299667 cri.go:89] found id: ""
	I1205 07:48:24.232773  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.232803  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:24.232821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:24.232927  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:24.261831  299667 cri.go:89] found id: ""
	I1205 07:48:24.261854  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.261863  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:24.261870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:24.261932  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:24.290390  299667 cri.go:89] found id: ""
	I1205 07:48:24.290412  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.290420  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:24.290426  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:24.290486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:24.314257  299667 cri.go:89] found id: ""
	I1205 07:48:24.314327  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.314360  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:24.314383  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:24.314475  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:24.338446  299667 cri.go:89] found id: ""
	I1205 07:48:24.338469  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.338477  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:24.338484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:24.338542  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:24.366265  299667 cri.go:89] found id: ""
	I1205 07:48:24.366302  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.366314  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:24.366323  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:24.366335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:24.398722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:24.398759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.430842  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:24.430872  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:24.486913  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:24.486947  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:24.500309  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:24.500333  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:24.571107  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
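Every sweep shells out to the same fixed set of gather commands; note the double fallback in the container-status one, which prefers crictl when `which` finds it, falls back to the bare name otherwise, and finally tries docker ps -a. A local sketch of that gather step, with the commands copied from the log but run through bash directly rather than over SSH via ssh_runner:

// gather_logs.go - the per-sweep log gathering, simplified to run
// locally. Commands are verbatim from the log lines above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gathers := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range gathers {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("  gather failed:", err)
			continue
		}
		fmt.Printf("  %d bytes captured\n", len(out))
	}
}

Incidentally, Go map iteration order is unspecified, which loosely mirrors how the gather order varies from cycle to cycle in the log above; only the failing "describe nodes" gather (omitted here) depends on the apiserver being up.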
	W1205 07:48:25.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:28.102336  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:27.072799  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:27.082983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:27.083049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:27.106973  299667 cri.go:89] found id: ""
	I1205 07:48:27.106997  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.107005  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:27.107012  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:27.107072  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:27.131580  299667 cri.go:89] found id: ""
	I1205 07:48:27.131604  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.131613  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:27.131619  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:27.131679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:27.156330  299667 cri.go:89] found id: ""
	I1205 07:48:27.156356  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.156364  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:27.156371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:27.156434  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:27.180350  299667 cri.go:89] found id: ""
	I1205 07:48:27.180375  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.180384  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:27.180391  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:27.180449  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:27.204756  299667 cri.go:89] found id: ""
	I1205 07:48:27.204779  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.204787  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:27.204800  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:27.204858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:27.232181  299667 cri.go:89] found id: ""
	I1205 07:48:27.232207  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.232216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:27.232223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:27.232299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:27.258059  299667 cri.go:89] found id: ""
	I1205 07:48:27.258086  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.258095  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:27.258102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:27.258165  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:27.281695  299667 cri.go:89] found id: ""
	I1205 07:48:27.281717  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.281725  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:27.281734  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:27.281746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:27.294855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:27.294880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:27.362846  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:27.362868  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:27.362880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:27.389761  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:27.389791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:27.422138  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:27.422165  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
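The gather pass above tails each relevant source with the exact commands shown. When triaging a failed start by hand, the same probes can be run inside the node; a minimal sketch, copied from the commands in this log:

    sudo crictl ps -a --quiet --name=kube-apiserver    # empty output means no apiserver container exists
    sudo journalctl -u containerd -n 400               # container runtime log tail
    sudo journalctl -u kubelet -n 400                  # kubelet log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel-level warnings and errors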
	I1205 07:48:29.980506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:29.990724  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:29.990791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:30.035211  299667 cri.go:89] found id: ""
	I1205 07:48:30.035238  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.035248  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:30.035256  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:30.035326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:30.063908  299667 cri.go:89] found id: ""
	I1205 07:48:30.063944  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.063953  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:30.063960  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:30.064034  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1205 07:48:30.103232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:32.602298  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:34.602332  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:30.095785  299667 cri.go:89] found id: ""
	I1205 07:48:30.095860  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.095883  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:30.095908  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:30.096002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:30.123133  299667 cri.go:89] found id: ""
	I1205 07:48:30.123156  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.123166  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:30.123172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:30.123235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:30.149862  299667 cri.go:89] found id: ""
	I1205 07:48:30.149885  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.149894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:30.149901  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:30.150013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:30.175817  299667 cri.go:89] found id: ""
	I1205 07:48:30.175883  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.175903  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:30.175920  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:30.176005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:30.201607  299667 cri.go:89] found id: ""
	I1205 07:48:30.201631  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.201640  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:30.201646  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:30.201711  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:30.227899  299667 cri.go:89] found id: ""
	I1205 07:48:30.227922  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.227931  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:30.227940  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:30.227952  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:30.241708  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:30.241742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:30.309566  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:30.309584  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:30.309597  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:30.334740  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:30.334771  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:30.378494  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:30.378524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:32.939968  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:32.950759  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:32.950832  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:32.978406  299667 cri.go:89] found id: ""
	I1205 07:48:32.978430  299667 logs.go:282] 0 containers: []
	W1205 07:48:32.978438  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:32.978454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:32.978513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:33.008532  299667 cri.go:89] found id: ""
	I1205 07:48:33.008559  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.008568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:33.008574  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:33.008650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:33.033972  299667 cri.go:89] found id: ""
	I1205 07:48:33.033997  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.034005  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:33.034013  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:33.034081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:33.059992  299667 cri.go:89] found id: ""
	I1205 07:48:33.060014  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.060023  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:33.060029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:33.060094  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:33.090354  299667 cri.go:89] found id: ""
	I1205 07:48:33.090379  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.090387  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:33.090395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:33.090454  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:33.114706  299667 cri.go:89] found id: ""
	I1205 07:48:33.114735  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.114744  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:33.114751  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:33.114809  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:33.140456  299667 cri.go:89] found id: ""
	I1205 07:48:33.140481  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.140490  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:33.140496  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:33.140557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:33.169438  299667 cri.go:89] found id: ""
	I1205 07:48:33.169461  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.169469  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:33.169478  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:33.169490  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:33.195155  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:33.195189  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:33.221590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:33.221617  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:33.277078  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:33.277110  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:33.290419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:33.290445  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:33.357621  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:48:36.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:38.602933  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:35.857840  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:35.869455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:35.869525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:35.904563  299667 cri.go:89] found id: ""
	I1205 07:48:35.904585  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.904594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:35.904601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:35.904664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:35.932592  299667 cri.go:89] found id: ""
	I1205 07:48:35.932613  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.932622  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:35.932628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:35.932690  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:35.961011  299667 cri.go:89] found id: ""
	I1205 07:48:35.961033  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.961048  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:35.961055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:35.961121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:35.988109  299667 cri.go:89] found id: ""
	I1205 07:48:35.988131  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.988139  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:35.988146  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:35.988212  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:36.021866  299667 cri.go:89] found id: ""
	I1205 07:48:36.021894  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.021903  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:36.021910  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:36.021980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:36.053675  299667 cri.go:89] found id: ""
	I1205 07:48:36.053697  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.053706  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:36.053713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:36.053773  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:36.088227  299667 cri.go:89] found id: ""
	I1205 07:48:36.088252  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.088261  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:36.088268  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:36.088330  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:36.114723  299667 cri.go:89] found id: ""
	I1205 07:48:36.114753  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.114762  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:36.114772  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:36.114792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:36.130077  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:36.130105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:36.199710  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:36.199733  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:36.199746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:36.224920  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:36.224953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:36.260346  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:36.260373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:38.818746  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:38.829029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:38.829103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:38.861723  299667 cri.go:89] found id: ""
	I1205 07:48:38.861746  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.861755  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:38.861761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:38.861827  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:38.889749  299667 cri.go:89] found id: ""
	I1205 07:48:38.889772  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.889781  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:38.889787  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:38.889849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:38.925308  299667 cri.go:89] found id: ""
	I1205 07:48:38.925337  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.925346  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:38.925352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:38.925412  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:38.955710  299667 cri.go:89] found id: ""
	I1205 07:48:38.955732  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.955740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:38.955746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:38.955803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:38.980907  299667 cri.go:89] found id: ""
	I1205 07:48:38.980934  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.980943  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:38.980951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:38.981013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:39.011368  299667 cri.go:89] found id: ""
	I1205 07:48:39.011398  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.011409  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:39.011416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:39.011489  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:39.037693  299667 cri.go:89] found id: ""
	I1205 07:48:39.037719  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.037727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:39.037734  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:39.037806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:39.063915  299667 cri.go:89] found id: ""
	I1205 07:48:39.063940  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.063949  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:39.063957  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:39.063969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:39.120923  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:39.120960  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:39.134276  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:39.134302  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:39.194044  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:39.194064  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:39.194076  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:39.218536  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:39.218569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:41.102495  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:43.102732  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:41.747231  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:41.758180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:41.758258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:41.785400  299667 cri.go:89] found id: ""
	I1205 07:48:41.785426  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.785435  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:41.785442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:41.785509  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:41.817641  299667 cri.go:89] found id: ""
	I1205 07:48:41.817667  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.817676  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:41.817683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:41.817747  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:41.842820  299667 cri.go:89] found id: ""
	I1205 07:48:41.842846  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.842855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:41.842869  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:41.842933  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:41.880166  299667 cri.go:89] found id: ""
	I1205 07:48:41.880194  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.880208  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:41.880214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:41.880291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:41.911193  299667 cri.go:89] found id: ""
	I1205 07:48:41.911258  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.911273  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:41.911281  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:41.911337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:41.935720  299667 cri.go:89] found id: ""
	I1205 07:48:41.935745  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.935754  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:41.935761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:41.935823  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:41.962907  299667 cri.go:89] found id: ""
	I1205 07:48:41.962976  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.962992  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:41.962998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:41.963065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:41.991087  299667 cri.go:89] found id: ""
	I1205 07:48:41.991113  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.991121  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:41.991130  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:41.991140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:42.070025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:42.070073  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:42.086499  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:42.086528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:42.164053  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:42.164130  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:42.164162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:42.192298  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:42.192342  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:44.734604  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:44.745356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:44.745423  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:44.770206  299667 cri.go:89] found id: ""
	I1205 07:48:44.770230  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.770239  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:44.770247  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:44.770305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:44.796086  299667 cri.go:89] found id: ""
	I1205 07:48:44.796109  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.796118  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:44.796124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:44.796182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:44.822053  299667 cri.go:89] found id: ""
	I1205 07:48:44.822125  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.822148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:44.822167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:44.822258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:44.855227  299667 cri.go:89] found id: ""
	I1205 07:48:44.855298  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.855320  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:44.855339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:44.855422  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:44.884787  299667 cri.go:89] found id: ""
	I1205 07:48:44.884859  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.885835  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:44.885875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:44.885967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:44.922015  299667 cri.go:89] found id: ""
	I1205 07:48:44.922040  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.922048  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:44.922055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:44.922120  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:44.946942  299667 cri.go:89] found id: ""
	I1205 07:48:44.946979  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.946988  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:44.946995  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:44.947056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:44.972229  299667 cri.go:89] found id: ""
	I1205 07:48:44.972253  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.972262  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:44.972270  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:44.972280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:44.997401  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:44.997434  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:45.054576  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:45.054602  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:45.102947  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:47.602661  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
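The node_ready retries from process 297527 poll GET /api/v1/nodes/no-preload-241270 directly while port 8443 refuses connections. Once the apiserver answers, an equivalent Ready check is a kubectl one-liner; a sketch, assuming kubectl is pointed at this profile's kubeconfig:

    # prints True once the Ready condition holds; connection refused while 8443 is down
    kubectl get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'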
	I1205 07:48:45.133742  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:45.133782  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:45.155399  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:45.155496  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:45.257582  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:47.759254  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:47.770034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:47.770107  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:47.799850  299667 cri.go:89] found id: ""
	I1205 07:48:47.799873  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.799882  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:47.799889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:47.799947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:47.824989  299667 cri.go:89] found id: ""
	I1205 07:48:47.825014  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.825022  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:47.825028  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:47.825089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:47.857967  299667 cri.go:89] found id: ""
	I1205 07:48:47.857993  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.858002  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:47.858008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:47.858065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:47.890800  299667 cri.go:89] found id: ""
	I1205 07:48:47.890833  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.890842  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:47.890851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:47.890911  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:47.921850  299667 cri.go:89] found id: ""
	I1205 07:48:47.921874  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.921883  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:47.921890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:47.921950  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:47.946404  299667 cri.go:89] found id: ""
	I1205 07:48:47.946426  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.946435  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:47.946442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:47.946501  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:47.972095  299667 cri.go:89] found id: ""
	I1205 07:48:47.972117  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.972125  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:47.972131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:47.972189  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:47.996555  299667 cri.go:89] found id: ""
	I1205 07:48:47.996577  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.996585  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:47.996594  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:47.996605  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:48.054087  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:48.054122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:48.069006  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:48.069038  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:48.132946  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:48.132968  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:48.132981  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:48.158949  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:48.158986  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
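	The block above is one full diagnostic cycle: minikube probes for each control-plane container with crictl, finds none, then falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. A shell replay of the probe step (each crictl invocation is verbatim from the log; the loop itself is shorthand, not minikube's implementation):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"   # empty output means the component is not running
	    done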
	W1205 07:48:50.102346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:52.103160  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:54.602949  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
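	The W-level lines from process 297527 interleaved here come from the no-preload test polling node no-preload-241270 for its Ready condition every couple of seconds. A rough shell analogue of that poll (the real loop is Go code in node_ready.go; the kubectl context name below is an assumption):

	    # Assumed context name; retries until the Ready condition reports True.
	    until kubectl --context no-preload-241270 get node no-preload-241270 \
	          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
	      sleep 2
	    done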
	I1205 07:48:50.687838  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:50.698642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:50.698712  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:50.725092  299667 cri.go:89] found id: ""
	I1205 07:48:50.725113  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.725121  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:50.725128  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:50.725208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:50.750131  299667 cri.go:89] found id: ""
	I1205 07:48:50.750153  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.750161  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:50.750167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:50.750233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:50.774733  299667 cri.go:89] found id: ""
	I1205 07:48:50.774755  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.774765  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:50.774773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:50.774858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:50.803492  299667 cri.go:89] found id: ""
	I1205 07:48:50.803514  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.803524  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:50.803531  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:50.803596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:50.828915  299667 cri.go:89] found id: ""
	I1205 07:48:50.828938  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.828947  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:50.828953  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:50.829022  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:50.862065  299667 cri.go:89] found id: ""
	I1205 07:48:50.862090  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.862098  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:50.862105  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:50.862168  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:50.888327  299667 cri.go:89] found id: ""
	I1205 07:48:50.888356  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.888365  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:50.888371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:50.888432  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:50.917551  299667 cri.go:89] found id: ""
	I1205 07:48:50.917583  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.917592  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:50.917601  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:50.917613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:50.976691  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:50.976725  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:50.990259  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:50.990285  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:51.057592  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:51.057614  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:51.057628  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:51.088874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:51.088916  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.619589  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:53.630457  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:53.630521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:53.662396  299667 cri.go:89] found id: ""
	I1205 07:48:53.662420  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.662429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:53.662435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:53.662493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:53.687365  299667 cri.go:89] found id: ""
	I1205 07:48:53.687393  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.687402  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:53.687408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:53.687469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:53.711757  299667 cri.go:89] found id: ""
	I1205 07:48:53.711782  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.711791  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:53.711798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:53.711893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:53.735695  299667 cri.go:89] found id: ""
	I1205 07:48:53.735721  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.735730  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:53.735736  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:53.735793  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:53.763008  299667 cri.go:89] found id: ""
	I1205 07:48:53.763032  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.763041  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:53.763047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:53.763104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:53.791424  299667 cri.go:89] found id: ""
	I1205 07:48:53.791498  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.791520  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:53.791537  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:53.791617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:53.815855  299667 cri.go:89] found id: ""
	I1205 07:48:53.815876  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.815884  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:53.815890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:53.815946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:53.839524  299667 cri.go:89] found id: ""
	I1205 07:48:53.839548  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.839557  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:53.839565  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:53.839577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.884515  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:53.884591  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:53.947646  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:53.947682  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:53.961152  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:53.961211  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:54.031297  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:54.031321  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:54.031335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:48:57.102570  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:59.102902  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:56.557021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:56.567576  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:56.567694  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:56.596257  299667 cri.go:89] found id: ""
	I1205 07:48:56.596291  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.596300  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:56.596306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:56.596381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:56.627549  299667 cri.go:89] found id: ""
	I1205 07:48:56.627575  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.627583  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:56.627590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:56.627649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:56.661291  299667 cri.go:89] found id: ""
	I1205 07:48:56.661313  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.661321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:56.661332  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:56.661391  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:56.687435  299667 cri.go:89] found id: ""
	I1205 07:48:56.687462  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.687471  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:56.687477  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:56.687540  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:56.712238  299667 cri.go:89] found id: ""
	I1205 07:48:56.712261  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.712271  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:56.712277  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:56.712340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:56.736638  299667 cri.go:89] found id: ""
	I1205 07:48:56.736663  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.736672  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:56.736690  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:56.736748  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:56.760967  299667 cri.go:89] found id: ""
	I1205 07:48:56.761001  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.761010  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:56.761016  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:56.761075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:56.784912  299667 cri.go:89] found id: ""
	I1205 07:48:56.784939  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.784947  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:56.784958  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:56.784969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.808701  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:56.808734  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:56.835856  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:56.835884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:56.896082  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:56.896154  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:56.914235  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:56.914310  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:56.981742  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:59.483411  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:59.494080  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:59.494149  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:59.521983  299667 cri.go:89] found id: ""
	I1205 07:48:59.522007  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.522015  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:59.522023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:59.522081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:59.547605  299667 cri.go:89] found id: ""
	I1205 07:48:59.547637  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.547646  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:59.547652  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:59.547718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:59.572816  299667 cri.go:89] found id: ""
	I1205 07:48:59.572839  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.572847  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:59.572854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:59.572909  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:59.598049  299667 cri.go:89] found id: ""
	I1205 07:48:59.598070  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.598078  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:59.598085  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:59.598145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:59.624907  299667 cri.go:89] found id: ""
	I1205 07:48:59.624928  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.624937  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:59.624943  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:59.625001  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:59.651926  299667 cri.go:89] found id: ""
	I1205 07:48:59.651947  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.651955  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:59.651962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:59.652019  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:59.680003  299667 cri.go:89] found id: ""
	I1205 07:48:59.680080  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.680103  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:59.680120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:59.680228  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:59.705437  299667 cri.go:89] found id: ""
	I1205 07:48:59.705465  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.705474  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:59.705483  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:59.705493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:59.763111  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:59.763142  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:59.777300  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:59.777368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:59.842575  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:59.842643  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:59.842663  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:59.869833  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:59.869908  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:01.602955  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:04.102698  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:02.402084  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:02.412782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:02.412851  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:02.438256  299667 cri.go:89] found id: ""
	I1205 07:49:02.438279  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.438287  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:02.438294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:02.438352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:02.465899  299667 cri.go:89] found id: ""
	I1205 07:49:02.465926  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.465935  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:02.465942  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:02.466005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:02.490481  299667 cri.go:89] found id: ""
	I1205 07:49:02.490503  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.490513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:02.490519  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:02.490586  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:02.516169  299667 cri.go:89] found id: ""
	I1205 07:49:02.516196  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.516205  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:02.516211  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:02.516271  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:02.541403  299667 cri.go:89] found id: ""
	I1205 07:49:02.541429  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.541439  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:02.541445  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:02.541507  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:02.566995  299667 cri.go:89] found id: ""
	I1205 07:49:02.567017  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.567025  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:02.567032  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:02.567099  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:02.597621  299667 cri.go:89] found id: ""
	I1205 07:49:02.597644  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.597652  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:02.597657  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:02.597716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:02.628924  299667 cri.go:89] found id: ""
	I1205 07:49:02.628951  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.628960  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:02.628969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:02.628980  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:02.693315  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:02.693348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:02.707066  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:02.707162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:02.771707  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:02.771729  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:02.771742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:02.797113  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:02.797145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:06.603033  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:09.102351  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:05.326530  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:05.336990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:05.337057  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:05.360427  299667 cri.go:89] found id: ""
	I1205 07:49:05.360451  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.360460  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:05.360466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:05.360525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:05.384196  299667 cri.go:89] found id: ""
	I1205 07:49:05.384222  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.384230  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:05.384237  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:05.384299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:05.410321  299667 cri.go:89] found id: ""
	I1205 07:49:05.410344  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.410352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:05.410358  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:05.410417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:05.433726  299667 cri.go:89] found id: ""
	I1205 07:49:05.433793  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.433815  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:05.433833  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:05.433921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:05.458853  299667 cri.go:89] found id: ""
	I1205 07:49:05.458924  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.458940  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:05.458947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:05.459008  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:05.482445  299667 cri.go:89] found id: ""
	I1205 07:49:05.482514  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.482529  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:05.482538  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:05.482610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:05.507192  299667 cri.go:89] found id: ""
	I1205 07:49:05.507260  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.507282  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:05.507300  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:05.507393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:05.532405  299667 cri.go:89] found id: ""
	I1205 07:49:05.532439  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.532448  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:05.532459  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:05.532470  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:05.587713  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:05.587744  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:05.600994  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:05.601062  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:05.676675  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:05.676745  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:05.676770  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:05.700917  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:05.700948  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
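	Each cycle re-runs the same four gathering commands over SSH (ssh_runner.go). For local debugging they can be bundled into one helper, sketched here (the individual commands are verbatim from the log; the wrapper and the output path are illustrative):

	    gather() {
	      sudo journalctl -u kubelet -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo journalctl -u containerd -n 400
	      sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    }
	    gather > /tmp/minikube-gather.log 2>&1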
	I1205 07:49:08.230743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:08.241254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:08.241324  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:08.265687  299667 cri.go:89] found id: ""
	I1205 07:49:08.265765  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.265781  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:08.265789  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:08.265873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:08.291182  299667 cri.go:89] found id: ""
	I1205 07:49:08.291212  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.291222  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:08.291230  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:08.291288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:08.316404  299667 cri.go:89] found id: ""
	I1205 07:49:08.316431  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.316439  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:08.316446  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:08.316503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:08.342004  299667 cri.go:89] found id: ""
	I1205 07:49:08.342030  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.342038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:08.342044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:08.342103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:08.370679  299667 cri.go:89] found id: ""
	I1205 07:49:08.370700  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.370708  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:08.370715  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:08.370791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:08.398788  299667 cri.go:89] found id: ""
	I1205 07:49:08.398848  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.398880  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:08.398896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:08.398967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:08.427499  299667 cri.go:89] found id: ""
	I1205 07:49:08.427532  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.427552  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:08.427560  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:08.427627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:08.455982  299667 cri.go:89] found id: ""
	I1205 07:49:08.456008  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.456016  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:08.456025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:08.456037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:08.469660  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:08.469687  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:08.534660  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
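	The stderr above is the generic failure mode of every kubectl call in this phase: nothing is listening on localhost:8443 yet, so API discovery fails before any request is issued. A plain TCP dial reproduces the same condition (a sketch for illustration, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the log: dial tcp [::1]:8443: connect: connection refused
    		fmt.Printf("apiserver not reachable: %v\n", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }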
	I1205 07:49:08.534684  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:08.534697  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:08.560195  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:08.560228  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.590035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:08.590061  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
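	The "Gathering logs for ..." steps above each run one shell command over SSH. A local stand-in that collects the same journalctl and dmesg output (the gather helper is illustrative; the commands are the ones shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command through bash and prints its output
    // under a label, mirroring the log-collection steps above.
    func gather(label, command string) {
    	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
    	fmt.Printf("==> %s <==\n%s", label, out)
    	if err != nil {
    		fmt.Printf("(command failed: %v)\n", err)
    	}
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("containerd", `sudo journalctl -u containerd -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    }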
	W1205 07:49:11.102705  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:13.103312  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
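	The node_ready.go warnings interleaved here come from a second test profile (pid 297527) polling the no-preload-241270 node object and retrying while the apiserver refuses connections. A rough sketch of such a retry loop (URL and node name taken from the log; InsecureSkipVerify is for illustration only, the real client authenticates with certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
    	for attempt := 1; attempt <= 5; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("attempt %d: error getting node (will retry): %v\n", attempt, err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Printf("attempt %d: apiserver answered with HTTP %d\n", attempt, resp.StatusCode)
    		return
    	}
    	fmt.Println("giving up: apiserver never answered")
    }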
	I1205 07:49:11.150392  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:11.161108  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:11.161194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:11.185243  299667 cri.go:89] found id: ""
	I1205 07:49:11.185264  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.185273  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:11.185280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:11.185338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:11.208758  299667 cri.go:89] found id: ""
	I1205 07:49:11.208797  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.208806  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:11.208815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:11.208884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:11.235054  299667 cri.go:89] found id: ""
	I1205 07:49:11.235077  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.235086  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:11.235092  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:11.235157  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:11.259045  299667 cri.go:89] found id: ""
	I1205 07:49:11.259068  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.259076  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:11.259082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:11.259143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:11.288257  299667 cri.go:89] found id: ""
	I1205 07:49:11.288282  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.288291  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:11.288298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:11.288354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:11.312884  299667 cri.go:89] found id: ""
	I1205 07:49:11.312906  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.312914  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:11.312922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:11.312978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:11.341317  299667 cri.go:89] found id: ""
	I1205 07:49:11.341340  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.341348  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:11.341354  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:11.341411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:11.365207  299667 cri.go:89] found id: ""
	I1205 07:49:11.365234  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.365243  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:11.365260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:11.365271  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:11.423587  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:11.423619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:11.437723  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:11.437796  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:11.504822  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:11.504896  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:11.504935  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:11.529753  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:11.529791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:14.059148  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:14.069586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:14.069676  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:14.103804  299667 cri.go:89] found id: ""
	I1205 07:49:14.103828  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.103837  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:14.103843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:14.103901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:14.135010  299667 cri.go:89] found id: ""
	I1205 07:49:14.135031  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.135040  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:14.135045  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:14.135104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:14.170829  299667 cri.go:89] found id: ""
	I1205 07:49:14.170851  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.170859  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:14.170865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:14.170926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:14.199693  299667 cri.go:89] found id: ""
	I1205 07:49:14.199715  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.199724  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:14.199730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:14.199789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:14.223902  299667 cri.go:89] found id: ""
	I1205 07:49:14.223924  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.223931  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:14.223937  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:14.224003  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:14.247854  299667 cri.go:89] found id: ""
	I1205 07:49:14.247926  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.247950  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:14.247969  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:14.248063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:14.272146  299667 cri.go:89] found id: ""
	I1205 07:49:14.272219  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.272250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:14.272270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:14.272375  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:14.297307  299667 cri.go:89] found id: ""
	I1205 07:49:14.297377  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.297404  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:14.297421  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:14.297436  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:14.352148  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:14.352181  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:14.365391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:14.365420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:14.429045  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:14.429068  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:14.429080  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:14.453460  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:14.453494  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
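	Each diagnostic cycle above opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest matching process, so a nonzero exit simply means no apiserver process exists yet. A sketch of the same check (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits nonzero when no process matches.
    		fmt.Println("kube-apiserver process not found:", err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }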
	W1205 07:49:15.602762  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:17.602959  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:16.984086  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:16.994499  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:16.994567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:17.022900  299667 cri.go:89] found id: ""
	I1205 07:49:17.022923  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.022932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:17.022939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:17.022997  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:17.047244  299667 cri.go:89] found id: ""
	I1205 07:49:17.047318  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.047332  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:17.047339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:17.047415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:17.070683  299667 cri.go:89] found id: ""
	I1205 07:49:17.070716  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.070725  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:17.070732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:17.070811  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:17.104238  299667 cri.go:89] found id: ""
	I1205 07:49:17.104310  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.104332  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:17.104351  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:17.104433  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:17.130787  299667 cri.go:89] found id: ""
	I1205 07:49:17.130867  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.130890  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:17.130907  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:17.131014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:17.159177  299667 cri.go:89] found id: ""
	I1205 07:49:17.159212  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.159221  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:17.159228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:17.159293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:17.187127  299667 cri.go:89] found id: ""
	I1205 07:49:17.187148  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.187157  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:17.187168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:17.187225  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:17.214608  299667 cri.go:89] found id: ""
	I1205 07:49:17.214633  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.214641  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:17.214650  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:17.214690  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:17.227937  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:17.227964  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:17.290517  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:17.290581  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:17.290600  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:17.315039  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:17.315074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:17.343285  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:17.343348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:19.899406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:19.910597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:19.910679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:19.935640  299667 cri.go:89] found id: ""
	I1205 07:49:19.935664  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.935673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:19.935679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:19.935736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:19.959309  299667 cri.go:89] found id: ""
	I1205 07:49:19.959336  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.959345  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:19.959352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:19.959418  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:19.982862  299667 cri.go:89] found id: ""
	I1205 07:49:19.982884  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.982893  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:19.982899  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:19.982957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:20.016784  299667 cri.go:89] found id: ""
	I1205 07:49:20.016810  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.016819  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:20.016826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:20.016893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:20.044555  299667 cri.go:89] found id: ""
	I1205 07:49:20.044580  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.044590  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:20.044597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:20.044657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:20.080570  299667 cri.go:89] found id: ""
	I1205 07:49:20.080595  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.080603  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:20.080610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:20.080689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1205 07:49:20.102423  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:22.102493  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:24.602330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:20.112802  299667 cri.go:89] found id: ""
	I1205 07:49:20.112829  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.112838  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:20.112852  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:20.112912  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:20.145614  299667 cri.go:89] found id: ""
	I1205 07:49:20.145642  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.145650  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:20.145659  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:20.145670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:20.208200  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:20.208233  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:20.222391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:20.222422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:20.285471  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:20.285500  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:20.285513  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:20.311384  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:20.311415  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:22.840933  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:22.854843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:22.854939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:22.881572  299667 cri.go:89] found id: ""
	I1205 07:49:22.881598  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.881608  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:22.881614  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:22.881677  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:22.917647  299667 cri.go:89] found id: ""
	I1205 07:49:22.917677  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.917686  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:22.917692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:22.917750  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:22.943325  299667 cri.go:89] found id: ""
	I1205 07:49:22.943346  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.943355  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:22.943362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:22.943426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:22.967894  299667 cri.go:89] found id: ""
	I1205 07:49:22.967955  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.967979  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:22.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:22.968076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:22.994911  299667 cri.go:89] found id: ""
	I1205 07:49:22.994976  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.994991  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:22.994998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:22.995056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:23.022399  299667 cri.go:89] found id: ""
	I1205 07:49:23.022464  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.022486  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:23.022506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:23.022581  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:23.048262  299667 cri.go:89] found id: ""
	I1205 07:49:23.048283  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.048291  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:23.048297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:23.048355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:23.072655  299667 cri.go:89] found id: ""
	I1205 07:49:23.072684  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.072694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:23.072702  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:23.072720  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:23.132711  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:23.132742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:23.146553  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:23.146576  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:23.218207  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:23.218230  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:23.218243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:23.242426  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:23.242462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:27.102316  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:29.602939  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:25.772926  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:25.783467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:25.783546  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:25.811044  299667 cri.go:89] found id: ""
	I1205 07:49:25.811066  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.811075  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:25.811081  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:25.811139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:25.835534  299667 cri.go:89] found id: ""
	I1205 07:49:25.835558  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.835568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:25.835575  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:25.835637  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:25.866938  299667 cri.go:89] found id: ""
	I1205 07:49:25.866966  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.866974  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:25.866981  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:25.867043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:25.897273  299667 cri.go:89] found id: ""
	I1205 07:49:25.897302  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.897313  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:25.897320  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:25.897380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:25.923461  299667 cri.go:89] found id: ""
	I1205 07:49:25.923489  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.923497  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:25.923504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:25.923590  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:25.946791  299667 cri.go:89] found id: ""
	I1205 07:49:25.946813  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.946822  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:25.946828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:25.946885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:25.971479  299667 cri.go:89] found id: ""
	I1205 07:49:25.971507  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.971515  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:25.971521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:25.971580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:25.994965  299667 cri.go:89] found id: ""
	I1205 07:49:25.994986  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.994994  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:25.995003  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:25.995014  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:26.058667  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:26.058701  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:26.073089  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:26.073119  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:26.150334  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:26.150355  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:26.150367  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:26.182077  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:26.182109  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:28.710700  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:28.722142  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:28.722208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:28.749003  299667 cri.go:89] found id: ""
	I1205 07:49:28.749029  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.749037  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:28.749044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:28.749101  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:28.774112  299667 cri.go:89] found id: ""
	I1205 07:49:28.774141  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.774152  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:28.774158  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:28.774215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:28.797966  299667 cri.go:89] found id: ""
	I1205 07:49:28.797987  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.797996  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:28.798002  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:28.798058  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:28.825668  299667 cri.go:89] found id: ""
	I1205 07:49:28.825694  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.825703  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:28.825709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:28.825788  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:28.856952  299667 cri.go:89] found id: ""
	I1205 07:49:28.856986  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.857001  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:28.857008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:28.857091  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:28.882695  299667 cri.go:89] found id: ""
	I1205 07:49:28.882730  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.882746  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:28.882753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:28.882822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:28.909550  299667 cri.go:89] found id: ""
	I1205 07:49:28.909584  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.909594  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:28.909601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:28.909671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:28.942251  299667 cri.go:89] found id: ""
	I1205 07:49:28.942319  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.942340  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:28.942362  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:28.942387  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:29.005506  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:29.005539  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:29.005554  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:29.030880  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:29.030910  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:29.058353  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:29.058381  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:29.121228  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:29.121304  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
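The describe-nodes failure above is the key symptom: every kubectl call inside the node is refused on localhost:8443, meaning nothing is listening on the apiserver port, i.e. kube-apiserver never came up. A minimal way to confirm this by hand over minikube ssh (a sketch; <profile> is a placeholder for this run's profile name, which the log does not show):

    # is anything listening on the apiserver port?
    minikube -p <profile> ssh -- sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
    # does containerd have an apiserver container at all (running or exited)?
    minikube -p <profile> ssh -- sudo crictl ps -a --name kube-apiserver

An empty crictl listing here matches the "0 containers" lines the log keeps printing for every control-plane component.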
	W1205 07:49:32.102320  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:34.103275  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:31.636506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:31.647234  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:31.647305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:31.672508  299667 cri.go:89] found id: ""
	I1205 07:49:31.672530  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.672539  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:31.672545  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:31.672603  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:31.696860  299667 cri.go:89] found id: ""
	I1205 07:49:31.696885  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.696894  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:31.696900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:31.696970  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:31.722649  299667 cri.go:89] found id: ""
	I1205 07:49:31.722676  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.722685  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:31.722692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:31.722770  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:31.748068  299667 cri.go:89] found id: ""
	I1205 07:49:31.748093  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.748101  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:31.748109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:31.748169  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:31.773290  299667 cri.go:89] found id: ""
	I1205 07:49:31.773315  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.773324  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:31.773330  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:31.773393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:31.804425  299667 cri.go:89] found id: ""
	I1205 07:49:31.804445  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.804454  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:31.804461  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:31.804521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:31.829116  299667 cri.go:89] found id: ""
	I1205 07:49:31.829137  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.829146  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:31.829152  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:31.829241  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:31.867330  299667 cri.go:89] found id: ""
	I1205 07:49:31.867406  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.867418  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:31.867427  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:31.867438  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:31.931647  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:31.931680  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.945211  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:31.945236  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:32.004694  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:32.004719  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:32.004738  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:32.031538  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:32.031572  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
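Each cycle above enumerates the control-plane components one by one through crictl, and an empty --quiet listing is what logs.go reports as "0 containers". The same enumeration as a plain shell loop, for reference (a sketch, not minikube's actual code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(sudo crictl ps -a --quiet --name="$c")"   # container IDs only, all states
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done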
	I1205 07:49:34.562576  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:34.573366  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:34.573477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:34.599238  299667 cri.go:89] found id: ""
	I1205 07:49:34.599262  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.599272  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:34.599279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:34.599342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:34.624561  299667 cri.go:89] found id: ""
	I1205 07:49:34.624589  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.624598  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:34.624604  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:34.624666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:34.649603  299667 cri.go:89] found id: ""
	I1205 07:49:34.649624  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.649637  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:34.649644  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:34.649707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:34.674019  299667 cri.go:89] found id: ""
	I1205 07:49:34.674043  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.674052  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:34.674058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:34.674121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:34.700890  299667 cri.go:89] found id: ""
	I1205 07:49:34.700912  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.700921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:34.700928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:34.700988  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:34.727454  299667 cri.go:89] found id: ""
	I1205 07:49:34.727482  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.727491  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:34.727498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:34.727558  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:34.753086  299667 cri.go:89] found id: ""
	I1205 07:49:34.753107  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.753115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:34.753120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:34.753208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:34.779077  299667 cri.go:89] found id: ""
	I1205 07:49:34.779100  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.779109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:34.779118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:34.779129  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:34.839330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:34.839368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:34.857129  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:34.857175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:34.932420  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:34.932440  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:34.932452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:34.957616  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:34.957649  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:36.602677  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:39.102319  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
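Interleaved with the log gathering, the no-preload test (pid 297527) keeps polling the node's Ready condition against https://192.168.76.2:8443 and hitting the same refused connection. The equivalent one-shot query with kubectl, assuming the kubeconfig context carries the profile name as minikube normally writes it:

    # prints "True" once the node is Ready; fails while 8443 is refused
    kubectl --context no-preload-241270 get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'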
	I1205 07:49:37.486529  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:37.496909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:37.496977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:37.521254  299667 cri.go:89] found id: ""
	I1205 07:49:37.521315  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.521349  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:37.521372  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:37.521462  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:37.544759  299667 cri.go:89] found id: ""
	I1205 07:49:37.544782  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.544791  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:37.544798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:37.544854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:37.569519  299667 cri.go:89] found id: ""
	I1205 07:49:37.569549  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.569558  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:37.569564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:37.569624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:37.593917  299667 cri.go:89] found id: ""
	I1205 07:49:37.593938  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.593947  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:37.593954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:37.594014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:37.619915  299667 cri.go:89] found id: ""
	I1205 07:49:37.619940  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.619949  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:37.619955  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:37.620016  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:37.647160  299667 cri.go:89] found id: ""
	I1205 07:49:37.647186  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.647195  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:37.647202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:37.647261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:37.672076  299667 cri.go:89] found id: ""
	I1205 07:49:37.672097  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.672105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:37.672111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:37.672170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:37.697550  299667 cri.go:89] found id: ""
	I1205 07:49:37.697573  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.697581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:37.697590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:37.697601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:37.754073  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:37.754105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:37.769043  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:37.769071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:37.831338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:37.831359  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:37.831371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:37.857528  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:37.857564  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:41.602800  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:44.102845  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:40.404513  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:40.415071  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:40.415143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:40.439261  299667 cri.go:89] found id: ""
	I1205 07:49:40.439283  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.439291  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:40.439298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:40.439355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:40.464063  299667 cri.go:89] found id: ""
	I1205 07:49:40.464084  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.464092  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:40.464098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:40.464158  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:40.490322  299667 cri.go:89] found id: ""
	I1205 07:49:40.490344  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.490352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:40.490359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:40.490419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:40.517055  299667 cri.go:89] found id: ""
	I1205 07:49:40.517078  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.517087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:40.517093  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:40.517151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:40.545250  299667 cri.go:89] found id: ""
	I1205 07:49:40.545273  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.545282  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:40.545288  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:40.545348  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:40.569118  299667 cri.go:89] found id: ""
	I1205 07:49:40.569142  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.569151  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:40.569188  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:40.569248  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:40.593152  299667 cri.go:89] found id: ""
	I1205 07:49:40.593209  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.593217  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:40.593223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:40.593287  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:40.617285  299667 cri.go:89] found id: ""
	I1205 07:49:40.617308  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.617316  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:40.617325  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:40.617336  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:40.681518  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:40.681540  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:40.681553  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:40.707309  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:40.707347  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.740118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:40.740145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:40.798971  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:40.799001  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
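Every gathering cycle opens with a process-level probe for the apiserver before falling back to crictl. The flags on that probe: -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest match:

    # exit status 1 (no matching process) is what sends the loop on to crictl
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'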
	I1205 07:49:43.313313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:43.324257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:43.324337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:43.356730  299667 cri.go:89] found id: ""
	I1205 07:49:43.356755  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.356763  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:43.356770  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:43.356828  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:43.386071  299667 cri.go:89] found id: ""
	I1205 07:49:43.386097  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.386106  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:43.386112  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:43.386172  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:43.415579  299667 cri.go:89] found id: ""
	I1205 07:49:43.415606  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.415615  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:43.415621  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:43.415679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:43.441039  299667 cri.go:89] found id: ""
	I1205 07:49:43.441064  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.441075  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:43.441082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:43.441141  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:43.466399  299667 cri.go:89] found id: ""
	I1205 07:49:43.466432  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.466442  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:43.466449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:43.466519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:43.497264  299667 cri.go:89] found id: ""
	I1205 07:49:43.497309  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.497319  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:43.497326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:43.497397  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:43.522221  299667 cri.go:89] found id: ""
	I1205 07:49:43.522247  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.522256  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:43.522262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:43.522325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:43.546887  299667 cri.go:89] found id: ""
	I1205 07:49:43.546953  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.546969  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:43.546980  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:43.546992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:43.613596  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:43.613644  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.628794  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:43.628825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:43.698835  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:43.698854  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:43.698866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:43.725776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:43.725811  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:46.103222  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:48.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:46.256365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:46.267583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:46.267659  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:46.296652  299667 cri.go:89] found id: ""
	I1205 07:49:46.296679  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.296687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:46.296694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:46.296760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:46.323489  299667 cri.go:89] found id: ""
	I1205 07:49:46.323514  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.323522  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:46.323529  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:46.323593  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:46.355225  299667 cri.go:89] found id: ""
	I1205 07:49:46.355249  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.355258  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:46.355265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:46.355340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:46.383644  299667 cri.go:89] found id: ""
	I1205 07:49:46.383678  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.383687  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:46.383694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:46.383768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:46.421484  299667 cri.go:89] found id: ""
	I1205 07:49:46.421518  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.421527  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:46.421533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:46.421602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:46.447032  299667 cri.go:89] found id: ""
	I1205 07:49:46.447057  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.447066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:46.447073  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:46.447136  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:46.472839  299667 cri.go:89] found id: ""
	I1205 07:49:46.472860  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.472867  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:46.472873  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:46.472930  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:46.501395  299667 cri.go:89] found id: ""
	I1205 07:49:46.501422  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.501432  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:46.501441  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:46.501452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:46.558146  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:46.558178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:46.573118  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:46.573146  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:46.637720  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:46.637741  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:46.637754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:46.662623  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:46.662658  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
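The "container status" step uses a small shell fallback: resolve crictl if it is on PATH, and fall back to docker ps when the crictl invocation fails. Spelled out (illustrative only; the backquoted form in the log behaves the same):

    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a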
	I1205 07:49:49.193341  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:49.204485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:49.204616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:49.235316  299667 cri.go:89] found id: ""
	I1205 07:49:49.235380  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.235403  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:49.235424  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:49.235503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:49.259781  299667 cri.go:89] found id: ""
	I1205 07:49:49.259811  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.259820  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:49.259826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:49.259894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:49.283985  299667 cri.go:89] found id: ""
	I1205 07:49:49.284025  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.284034  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:49.284041  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:49.284123  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:49.312614  299667 cri.go:89] found id: ""
	I1205 07:49:49.312643  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.312652  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:49.312659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:49.312728  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:49.338339  299667 cri.go:89] found id: ""
	I1205 07:49:49.338362  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.338371  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:49.338378  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:49.338444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:49.367532  299667 cri.go:89] found id: ""
	I1205 07:49:49.367557  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.367565  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:49.367572  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:49.367635  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:49.401925  299667 cri.go:89] found id: ""
	I1205 07:49:49.402000  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.402020  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:49.402038  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:49.402122  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:49.428942  299667 cri.go:89] found id: ""
	I1205 07:49:49.428975  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.428993  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:49.429003  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:49.429021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:49.492403  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:49.492426  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:49.492439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:49.517991  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:49.518021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.545729  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:49.545754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:49.601110  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:49.601140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:51.102462  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:53.103333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:52.115102  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:52.128449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:52.128522  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:52.158550  299667 cri.go:89] found id: ""
	I1205 07:49:52.158575  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.158584  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:52.158591  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:52.158654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:52.183729  299667 cri.go:89] found id: ""
	I1205 07:49:52.183750  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.183759  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:52.183765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:52.183829  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:52.209241  299667 cri.go:89] found id: ""
	I1205 07:49:52.209269  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.209279  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:52.209286  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:52.209367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:52.234457  299667 cri.go:89] found id: ""
	I1205 07:49:52.234488  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.234497  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:52.234504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:52.234568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:52.258774  299667 cri.go:89] found id: ""
	I1205 07:49:52.258799  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.258808  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:52.258815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:52.258904  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:52.284285  299667 cri.go:89] found id: ""
	I1205 07:49:52.284319  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.284329  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:52.284336  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:52.284406  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:52.311443  299667 cri.go:89] found id: ""
	I1205 07:49:52.311470  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.311479  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:52.311485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:52.311577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:52.335827  299667 cri.go:89] found id: ""
	I1205 07:49:52.335859  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.335868  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
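The eight crictl queries above differ only in the --name filter, so the whole sweep collapses to one loop; a sketch, with the component list taken from the log and the reporting format simplified:

# Sweep the control-plane component names through crictl, flagging any
# with no container (running or exited). Mirrors the empty results above.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "no container matching \"$name\"" >&2
done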
	I1205 07:49:52.335879  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:52.335890  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:52.395851  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:52.395889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:52.410419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:52.410446  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:52.478966  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:52.478997  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:52.479010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:52.504082  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:52.504114  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.031406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:55.042458  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:55.042534  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:55.066642  299667 cri.go:89] found id: ""
	I1205 07:49:55.066667  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.066677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:55.066684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:55.066746  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:49:55.602712  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:58.102265  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:55.091150  299667 cri.go:89] found id: ""
	I1205 07:49:55.091180  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.091189  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:55.091195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:55.091255  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:55.121930  299667 cri.go:89] found id: ""
	I1205 07:49:55.121951  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.121960  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:55.121965  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:55.122023  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:55.149981  299667 cri.go:89] found id: ""
	I1205 07:49:55.150058  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.150079  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:55.150097  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:55.150184  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:55.173681  299667 cri.go:89] found id: ""
	I1205 07:49:55.173704  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.173712  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:55.173718  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:55.173777  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:55.197308  299667 cri.go:89] found id: ""
	I1205 07:49:55.197332  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.197341  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:55.197347  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:55.197403  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:55.223472  299667 cri.go:89] found id: ""
	I1205 07:49:55.223493  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.223502  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:55.223508  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:55.223572  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:55.252432  299667 cri.go:89] found id: ""
	I1205 07:49:55.252457  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.252466  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:55.252474  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:55.252487  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:55.318488  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:55.318520  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:55.318533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:55.343511  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:55.343587  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.386735  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:55.386818  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:55.452457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:55.452497  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
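Each of these cycles opens with the same two-step liveness probe: pgrep for a kube-apiserver host process, then crictl for a containerized one. The two steps combine into a single check; a sketch assuming only the command behavior visible in the log (the exit-status handling is illustrative):

# Is any kube-apiserver present, either as a host process or as a
# CRI container (running or stopped)? Non-zero exit means neither.
if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
  echo "kube-apiserver process is running"
elif [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; then
  echo "kube-apiserver container exists (possibly stopped)"
else
  echo "no kube-apiserver found" >&2
  exit 1
fi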
	I1205 07:49:57.966172  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:57.976919  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:57.976991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:58.003394  299667 cri.go:89] found id: ""
	I1205 07:49:58.003420  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.003429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:58.003436  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:58.003505  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:58.040382  299667 cri.go:89] found id: ""
	I1205 07:49:58.040403  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.040411  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:58.040425  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:58.040486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:58.066131  299667 cri.go:89] found id: ""
	I1205 07:49:58.066161  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.066170  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:58.066177  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:58.066236  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:58.092126  299667 cri.go:89] found id: ""
	I1205 07:49:58.092149  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.092157  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:58.092164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:58.092224  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:58.123111  299667 cri.go:89] found id: ""
	I1205 07:49:58.123138  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.123147  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:58.123154  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:58.123215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:58.155898  299667 cri.go:89] found id: ""
	I1205 07:49:58.155920  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.155929  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:58.155936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:58.156002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:58.181658  299667 cri.go:89] found id: ""
	I1205 07:49:58.181684  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.181694  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:58.181700  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:58.181760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:58.211071  299667 cri.go:89] found id: ""
	I1205 07:49:58.211093  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.211102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:58.211111  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:58.211122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:58.271505  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:58.271551  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:58.287071  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:58.287097  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:58.357627  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:58.357680  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:58.357694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:58.388703  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:58.388747  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:00.103169  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:02.602855  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:04.603343  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
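The interleaved W-lines from pid 297527 belong to the parallel no-preload test, which is polling its node's Ready condition against 192.168.76.2:8443 and retrying on every connection refusal. A reachability-only version of that loop can be sketched with curl (the resource path and address come from the log; the loop, interval, and use of curl are illustrative, and unlike the real check this does not authenticate or inspect the Ready condition):

# Wait until the apiserver at 192.168.76.2:8443 accepts connections.
# curl exits 7 on "connection refused" and 0 on any HTTP response,
# so this tests reachability only, not node readiness.
until curl -sk "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270" >/dev/null; do
  echo "apiserver unreachable, retrying..." >&2
  sleep 2
done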
	I1205 07:50:00.928058  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:00.939115  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:00.939186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:00.967955  299667 cri.go:89] found id: ""
	I1205 07:50:00.967979  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.967989  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:00.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:00.968054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:00.994981  299667 cri.go:89] found id: ""
	I1205 07:50:00.995006  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.995014  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:00.995022  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:00.995081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:01.020388  299667 cri.go:89] found id: ""
	I1205 07:50:01.020412  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.020421  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:01.020427  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:01.020487  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:01.045771  299667 cri.go:89] found id: ""
	I1205 07:50:01.045796  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.045816  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:01.045839  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:01.045915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:01.072970  299667 cri.go:89] found id: ""
	I1205 07:50:01.072995  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.073004  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:01.073009  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:01.073069  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:01.110343  299667 cri.go:89] found id: ""
	I1205 07:50:01.110365  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.110374  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:01.110382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:01.110442  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:01.143588  299667 cri.go:89] found id: ""
	I1205 07:50:01.143627  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.143669  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:01.143676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:01.143734  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:01.173718  299667 cri.go:89] found id: ""
	I1205 07:50:01.173744  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.173753  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:01.173762  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:01.173775  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:01.240437  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:01.240461  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:01.240475  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:01.265849  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:01.265884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:01.295649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:01.295676  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:01.352457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:01.352493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
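Every "describe nodes" attempt in this report fails identically: kubectl reads /var/lib/minikube/kubeconfig, whose server is https://localhost:8443, and nothing is listening there because no apiserver container exists (see the empty crictl sweeps). To separate a dead listener from a kubeconfig or TLS problem, the check can be replayed by hand; a sketch using only paths and addresses that appear in the log (the curl step is an illustrative addition):

# Re-run minikube's exact check against the embedded kubeconfig:
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
# Then confirm the port is closed outright (exit 7, "connection refused")
# rather than answering with a TLS or auth error:
curl -sk https://localhost:8443/healthz || echo "nothing listening on 8443"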
	I1205 07:50:03.872935  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:03.884137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:03.884213  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:03.909107  299667 cri.go:89] found id: ""
	I1205 07:50:03.909129  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.909138  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:03.909144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:03.909231  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:03.935188  299667 cri.go:89] found id: ""
	I1205 07:50:03.935217  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.935229  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:03.935235  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:03.935293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:03.960991  299667 cri.go:89] found id: ""
	I1205 07:50:03.961013  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.961023  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:03.961029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:03.961087  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:03.993563  299667 cri.go:89] found id: ""
	I1205 07:50:03.993586  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.993595  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:03.993602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:03.993658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:04.022615  299667 cri.go:89] found id: ""
	I1205 07:50:04.022640  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.022650  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:04.022656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:04.022744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:04.052044  299667 cri.go:89] found id: ""
	I1205 07:50:04.052067  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.052076  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:04.052083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:04.052155  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:04.077688  299667 cri.go:89] found id: ""
	I1205 07:50:04.077766  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.077790  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:04.077798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:04.077873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:04.108745  299667 cri.go:89] found id: ""
	I1205 07:50:04.108772  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.108781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:04.108790  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:04.108806  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:04.124370  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:04.124398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:04.202708  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:04.202730  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:04.202742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:04.228486  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:04.228522  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:04.257187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:04.257214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:07.102231  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:09.102419  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:06.817489  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:06.828313  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:06.828385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:06.852373  299667 cri.go:89] found id: ""
	I1205 07:50:06.852445  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.852468  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:06.852489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:06.852557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:06.877263  299667 cri.go:89] found id: ""
	I1205 07:50:06.877291  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.877300  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:06.877306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:06.877373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:06.902856  299667 cri.go:89] found id: ""
	I1205 07:50:06.902882  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.902892  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:06.902898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:06.902962  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:06.928569  299667 cri.go:89] found id: ""
	I1205 07:50:06.928595  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.928604  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:06.928611  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:06.928689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:06.953448  299667 cri.go:89] found id: ""
	I1205 07:50:06.953481  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.953491  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:06.953498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:06.953567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:06.978486  299667 cri.go:89] found id: ""
	I1205 07:50:06.978557  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.978579  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:06.978592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:06.978653  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:07.004116  299667 cri.go:89] found id: ""
	I1205 07:50:07.004201  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.004245  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:07.004278  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:07.004369  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:07.030912  299667 cri.go:89] found id: ""
	I1205 07:50:07.030946  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.030956  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:07.030966  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:07.030995  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:07.087669  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:07.087703  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:07.102364  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:07.102424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:07.175733  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:07.175756  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:07.175768  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:07.201087  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:07.201120  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.733660  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:09.744254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:09.744322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:09.768703  299667 cri.go:89] found id: ""
	I1205 07:50:09.768725  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.768733  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:09.768740  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:09.768803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:09.792862  299667 cri.go:89] found id: ""
	I1205 07:50:09.792884  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.792892  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:09.792898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:09.792953  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:09.816998  299667 cri.go:89] found id: ""
	I1205 07:50:09.817020  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.817028  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:09.817042  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:09.817098  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:09.846103  299667 cri.go:89] found id: ""
	I1205 07:50:09.846128  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.846137  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:09.846144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:09.846215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:09.869920  299667 cri.go:89] found id: ""
	I1205 07:50:09.869943  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.869952  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:09.869958  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:09.870017  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:09.894186  299667 cri.go:89] found id: ""
	I1205 07:50:09.894207  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.894216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:09.894222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:09.894279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:09.918290  299667 cri.go:89] found id: ""
	I1205 07:50:09.918323  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.918332  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:09.918338  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:09.918404  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:09.942213  299667 cri.go:89] found id: ""
	I1205 07:50:09.942241  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.942250  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:09.942260  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:09.942300  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.971801  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:09.971827  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:10.027693  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:10.027732  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:10.042067  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:10.042095  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:11.102920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:13.602347  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:10.106137  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:10.106162  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:10.106175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.633673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:12.645469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:12.645547  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:12.676971  299667 cri.go:89] found id: ""
	I1205 07:50:12.676997  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.677007  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:12.677014  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:12.677084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:12.702338  299667 cri.go:89] found id: ""
	I1205 07:50:12.702361  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.702370  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:12.702377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:12.702436  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:12.726932  299667 cri.go:89] found id: ""
	I1205 07:50:12.726958  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.726968  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:12.726974  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:12.727054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:12.752194  299667 cri.go:89] found id: ""
	I1205 07:50:12.752231  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.752240  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:12.752246  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:12.752354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:12.777805  299667 cri.go:89] found id: ""
	I1205 07:50:12.777874  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.777897  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:12.777917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:12.777990  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:12.802215  299667 cri.go:89] found id: ""
	I1205 07:50:12.802240  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.802250  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:12.802257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:12.802334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:12.831796  299667 cri.go:89] found id: ""
	I1205 07:50:12.831821  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.831830  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:12.831836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:12.831899  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:12.856886  299667 cri.go:89] found id: ""
	I1205 07:50:12.856912  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.856921  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:12.856930  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:12.856941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:12.870323  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:12.870352  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:12.933303  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:12.933325  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:12.933339  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.958156  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:12.958191  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:12.986132  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:12.986158  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:15.602727  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:17.602807  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:15.543265  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:15.553756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:15.553824  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:15.579618  299667 cri.go:89] found id: ""
	I1205 07:50:15.579641  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.579650  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:15.579656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:15.579719  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:15.615622  299667 cri.go:89] found id: ""
	I1205 07:50:15.615646  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.615654  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:15.615660  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:15.615718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:15.648566  299667 cri.go:89] found id: ""
	I1205 07:50:15.648595  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.648604  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:15.648610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:15.648669  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:15.678106  299667 cri.go:89] found id: ""
	I1205 07:50:15.678132  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.678141  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:15.678147  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:15.678210  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:15.703125  299667 cri.go:89] found id: ""
	I1205 07:50:15.703148  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.703157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:15.703163  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:15.703229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:15.727847  299667 cri.go:89] found id: ""
	I1205 07:50:15.727873  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.727882  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:15.727889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:15.727948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:15.755105  299667 cri.go:89] found id: ""
	I1205 07:50:15.755129  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.755138  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:15.755144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:15.755203  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:15.780309  299667 cri.go:89] found id: ""
	I1205 07:50:15.780334  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.780343  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:15.780351  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:15.780362  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:15.836755  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:15.836788  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:15.850164  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:15.850241  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:15.913792  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:15.913812  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:15.913828  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:15.938310  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:15.938344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.465299  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:18.475870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:18.475939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:18.501780  299667 cri.go:89] found id: ""
	I1205 07:50:18.501806  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.501821  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:18.501828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:18.501886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:18.526890  299667 cri.go:89] found id: ""
	I1205 07:50:18.526920  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.526929  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:18.526936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:18.526996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:18.552506  299667 cri.go:89] found id: ""
	I1205 07:50:18.552531  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.552540  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:18.552546  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:18.552605  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:18.577492  299667 cri.go:89] found id: ""
	I1205 07:50:18.577517  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.577526  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:18.577533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:18.577591  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:18.609705  299667 cri.go:89] found id: ""
	I1205 07:50:18.609731  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.609740  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:18.609746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:18.609804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:18.637216  299667 cri.go:89] found id: ""
	I1205 07:50:18.637242  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.637251  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:18.637258  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:18.637315  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:18.663025  299667 cri.go:89] found id: ""
	I1205 07:50:18.663051  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.663060  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:18.663067  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:18.663145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:18.689022  299667 cri.go:89] found id: ""
	I1205 07:50:18.689086  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.689109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:18.689131  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:18.689192  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:18.703250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:18.703279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:18.768192  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:18.768211  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:18.768223  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:18.793554  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:18.793585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.828893  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:18.828920  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:20.102540  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:22.602506  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:24.602962  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:21.385309  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:21.397376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:21.397451  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:21.424618  299667 cri.go:89] found id: ""
	I1205 07:50:21.424642  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.424652  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:21.424659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:21.424717  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:21.451181  299667 cri.go:89] found id: ""
	I1205 07:50:21.451202  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.451211  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:21.451217  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:21.451275  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:21.475206  299667 cri.go:89] found id: ""
	I1205 07:50:21.475228  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.475237  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:21.475243  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:21.475300  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:21.505637  299667 cri.go:89] found id: ""
	I1205 07:50:21.505663  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.505672  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:21.505679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:21.505738  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:21.534466  299667 cri.go:89] found id: ""
	I1205 07:50:21.534541  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.534557  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:21.534579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:21.534644  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:21.560428  299667 cri.go:89] found id: ""
	I1205 07:50:21.560453  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.560462  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:21.560472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:21.560530  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:21.584825  299667 cri.go:89] found id: ""
	I1205 07:50:21.584852  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.584860  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:21.584867  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:21.584934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:21.623066  299667 cri.go:89] found id: ""
	I1205 07:50:21.623093  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.623102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:21.623112  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:21.623127  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:21.687398  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:21.687435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:21.702122  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:21.702149  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:21.767031  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:21.767050  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:21.767063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:21.791862  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:21.791895  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.321349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:24.331708  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:24.331778  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:24.369231  299667 cri.go:89] found id: ""
	I1205 07:50:24.369255  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.369264  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:24.369270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:24.369345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:24.397058  299667 cri.go:89] found id: ""
	I1205 07:50:24.397078  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.397088  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:24.397094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:24.397152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:24.425233  299667 cri.go:89] found id: ""
	I1205 07:50:24.425256  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.425264  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:24.425271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:24.425325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:24.451011  299667 cri.go:89] found id: ""
	I1205 07:50:24.451032  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.451041  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:24.451047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:24.451103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:24.475249  299667 cri.go:89] found id: ""
	I1205 07:50:24.475278  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.475287  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:24.475294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:24.475352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:24.500860  299667 cri.go:89] found id: ""
	I1205 07:50:24.500885  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.500895  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:24.500911  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:24.500969  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:24.525728  299667 cri.go:89] found id: ""
	I1205 07:50:24.525751  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.525771  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:24.525778  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:24.525839  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:24.549854  299667 cri.go:89] found id: ""
	I1205 07:50:24.549877  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.549885  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:24.549894  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:24.549923  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:24.574340  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:24.574371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.609821  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:24.609850  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:24.668879  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:24.668917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:24.683025  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:24.683052  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:24.745503  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:50:27.102442  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:29.102897  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:27.247317  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:27.258551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:27.258627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:27.282556  299667 cri.go:89] found id: ""
	I1205 07:50:27.282584  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.282594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:27.282601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:27.282685  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:27.311566  299667 cri.go:89] found id: ""
	I1205 07:50:27.311593  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.311602  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:27.311608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:27.311666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:27.336201  299667 cri.go:89] found id: ""
	I1205 07:50:27.336226  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.336235  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:27.336241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:27.336295  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:27.374655  299667 cri.go:89] found id: ""
	I1205 07:50:27.374733  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.374756  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:27.374804  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:27.374881  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:27.403358  299667 cri.go:89] found id: ""
	I1205 07:50:27.403381  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.403390  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:27.403396  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:27.403453  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:27.434322  299667 cri.go:89] found id: ""
	I1205 07:50:27.434347  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.434355  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:27.434362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:27.434430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:27.458621  299667 cri.go:89] found id: ""
	I1205 07:50:27.458643  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.458651  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:27.458669  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:27.458726  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:27.487490  299667 cri.go:89] found id: ""
	I1205 07:50:27.487514  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.487524  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:27.487532  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:27.487543  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:27.515434  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:27.515462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:27.574832  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:27.574864  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:27.588186  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:27.588210  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:27.666339  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:27.666400  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:27.666420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:50:31.602443  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:34.102266  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:30.192057  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:30.203579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:30.203657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:30.233613  299667 cri.go:89] found id: ""
	I1205 07:50:30.233663  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.233673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:30.233680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:30.233739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:30.262491  299667 cri.go:89] found id: ""
	I1205 07:50:30.262517  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.262526  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:30.262532  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:30.262599  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:30.292006  299667 cri.go:89] found id: ""
	I1205 07:50:30.292031  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.292042  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:30.292078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:30.292134  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:30.317938  299667 cri.go:89] found id: ""
	I1205 07:50:30.317963  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.317972  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:30.317979  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:30.318037  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:30.359844  299667 cri.go:89] found id: ""
	I1205 07:50:30.359871  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.359880  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:30.359887  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:30.359946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:30.391160  299667 cri.go:89] found id: ""
	I1205 07:50:30.391187  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.391196  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:30.391202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:30.391256  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:30.424091  299667 cri.go:89] found id: ""
	I1205 07:50:30.424116  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.424124  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:30.424131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:30.424186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:30.449137  299667 cri.go:89] found id: ""
	I1205 07:50:30.449184  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.449193  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:30.449204  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:30.449216  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:30.477964  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:30.477990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:30.535174  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:30.535208  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:30.548511  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:30.548537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:30.611856  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:30.611880  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:30.611892  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.137527  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:33.148376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:33.148457  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:33.173779  299667 cri.go:89] found id: ""
	I1205 07:50:33.173802  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.173810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:33.173816  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:33.173893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:33.198637  299667 cri.go:89] found id: ""
	I1205 07:50:33.198661  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.198671  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:33.198678  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:33.198739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:33.227950  299667 cri.go:89] found id: ""
	I1205 07:50:33.227972  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.227980  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:33.227986  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:33.228056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:33.252400  299667 cri.go:89] found id: ""
	I1205 07:50:33.252434  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.252446  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:33.252454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:33.252528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:33.277287  299667 cri.go:89] found id: ""
	I1205 07:50:33.277311  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.277320  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:33.277326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:33.277384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:33.303260  299667 cri.go:89] found id: ""
	I1205 07:50:33.303285  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.303294  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:33.303310  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:33.303387  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:33.327837  299667 cri.go:89] found id: ""
	I1205 07:50:33.327860  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.327868  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:33.327875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:33.327934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:33.361138  299667 cri.go:89] found id: ""
	I1205 07:50:33.361196  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.361206  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:33.361216  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:33.361227  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:33.439490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:33.439534  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:33.454134  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:33.454201  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:33.519248  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:33.519324  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:33.519346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.544362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:33.544404  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
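The "describe nodes" failure above is kubectl, run against /var/lib/minikube/kubeconfig, dialing localhost:8443 and finding nothing listening. A hedged way to confirm the port is dead before suspecting the kubeconfig itself, assuming ss and curl are available inside the minikube node:

    sudo ss -ltn 'sport = :8443'        # no LISTEN line expected while this failure persists
    curl -ks --max-time 5 https://localhost:8443/healthz \
      || echo "apiserver not reachable on :8443"

"connection refused" occurs before any TLS handshake, so this probe needs no client certificate.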
	W1205 07:50:36.102706  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:38.602248  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:36.073913  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:36.085180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:36.085254  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:36.111524  299667 cri.go:89] found id: ""
	I1205 07:50:36.111549  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.111558  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:36.111565  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:36.111624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:36.136758  299667 cri.go:89] found id: ""
	I1205 07:50:36.136832  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.136856  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:36.136874  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:36.136999  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:36.170081  299667 cri.go:89] found id: ""
	I1205 07:50:36.170105  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.170113  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:36.170120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:36.170177  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:36.194713  299667 cri.go:89] found id: ""
	I1205 07:50:36.194738  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.194747  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:36.194753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:36.194817  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:36.219168  299667 cri.go:89] found id: ""
	I1205 07:50:36.219190  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.219199  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:36.219205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:36.219272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:36.243582  299667 cri.go:89] found id: ""
	I1205 07:50:36.243653  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.243676  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:36.243694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:36.243775  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:36.268659  299667 cri.go:89] found id: ""
	I1205 07:50:36.268730  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.268754  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:36.268771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:36.268853  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:36.293268  299667 cri.go:89] found id: ""
	I1205 07:50:36.293338  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.293361  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:36.293383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:36.293416  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:36.372932  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:36.372960  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:36.372972  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:36.400267  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:36.400358  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:36.432348  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:36.432371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:36.488499  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:36.488533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.002493  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:39.016301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:39.016371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:39.041723  299667 cri.go:89] found id: ""
	I1205 07:50:39.041799  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.041815  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:39.041823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:39.041885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:39.066151  299667 cri.go:89] found id: ""
	I1205 07:50:39.066174  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.066183  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:39.066189  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:39.066266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:39.090650  299667 cri.go:89] found id: ""
	I1205 07:50:39.090673  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.090682  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:39.090688  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:39.090745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:39.119700  299667 cri.go:89] found id: ""
	I1205 07:50:39.119732  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.119740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:39.119747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:39.119810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:39.144307  299667 cri.go:89] found id: ""
	I1205 07:50:39.144369  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.144389  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:39.144406  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:39.144488  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:39.171025  299667 cri.go:89] found id: ""
	I1205 07:50:39.171048  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.171057  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:39.171063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:39.171127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:39.195100  299667 cri.go:89] found id: ""
	I1205 07:50:39.195121  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.195130  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:39.195136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:39.195197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:39.218959  299667 cri.go:89] found id: ""
	I1205 07:50:39.218980  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.218991  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:39.219000  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:39.219010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:39.243315  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:39.243346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:39.270633  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:39.270709  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:39.330141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:39.330172  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.345855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:39.345883  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:39.426940  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:50:40.603240  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:43.103156  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
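The W-level lines from pid 297527 interleaved here come from a different profile (the no-preload-241270 start), which is polling its node's Ready condition on roughly a 2.5-second cadence and hitting the same symptom against 192.168.76.2:8443. An equivalent one-shot reachability probe (URL taken verbatim from the log; since the dial is refused before TLS, no client certificate is needed):

    curl -ks --max-time 5 https://192.168.76.2:8443/api/v1/nodes/no-preload-241270 \
      || echo "connection refused: no apiserver on 192.168.76.2:8443"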
	I1205 07:50:41.928763  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:41.939293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:41.939415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:41.964816  299667 cri.go:89] found id: ""
	I1205 07:50:41.964850  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.964859  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:41.964865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:41.964931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:41.990880  299667 cri.go:89] found id: ""
	I1205 07:50:41.990914  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.990923  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:41.990929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:41.990996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:42.022456  299667 cri.go:89] found id: ""
	I1205 07:50:42.022483  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.022494  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:42.022501  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:42.022570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:42.049261  299667 cri.go:89] found id: ""
	I1205 07:50:42.049328  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.049352  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:42.049369  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:42.049446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:42.077034  299667 cri.go:89] found id: ""
	I1205 07:50:42.077108  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.077134  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:42.077255  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:42.077338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:42.114881  299667 cri.go:89] found id: ""
	I1205 07:50:42.114910  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.114921  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:42.114928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:42.114994  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:42.151897  299667 cri.go:89] found id: ""
	I1205 07:50:42.151926  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.151936  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:42.151944  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:42.152012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:42.185532  299667 cri.go:89] found id: ""
	I1205 07:50:42.185556  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.185565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:42.185574  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:42.185585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:42.246490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:42.246537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:42.262324  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:42.262359  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:42.331135  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:42.331201  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:42.331219  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:42.358803  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:42.358836  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:44.909321  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:44.920001  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:44.920070  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:44.945367  299667 cri.go:89] found id: ""
	I1205 07:50:44.945392  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.945401  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:44.945407  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:44.945463  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:44.970751  299667 cri.go:89] found id: ""
	I1205 07:50:44.970779  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.970788  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:44.970794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:44.970873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:44.999654  299667 cri.go:89] found id: ""
	I1205 07:50:44.999678  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.999688  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:44.999694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:44.999760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:45.065387  299667 cri.go:89] found id: ""
	I1205 07:50:45.065496  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.065521  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:45.065554  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:45.065661  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1205 07:50:45.105072  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:47.602920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:45.101338  299667 cri.go:89] found id: ""
	I1205 07:50:45.101365  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.101375  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:45.101386  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:45.101459  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:45.140148  299667 cri.go:89] found id: ""
	I1205 07:50:45.140181  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.140192  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:45.140200  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:45.140301  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:45.178981  299667 cri.go:89] found id: ""
	I1205 07:50:45.179025  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.179035  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:45.179043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:45.179176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:45.219922  299667 cri.go:89] found id: ""
	I1205 07:50:45.219949  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.219958  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:45.219969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:45.219989  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:45.291787  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:45.291824  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:45.306539  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:45.306565  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:45.383110  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:45.383171  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:45.383206  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:45.410722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:45.410808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:47.941304  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:47.952011  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:47.952084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:47.978179  299667 cri.go:89] found id: ""
	I1205 07:50:47.978201  299667 logs.go:282] 0 containers: []
	W1205 07:50:47.978210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:47.978216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:47.978274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:48.005927  299667 cri.go:89] found id: ""
	I1205 07:50:48.005954  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.005964  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:48.005971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:48.006042  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:48.040049  299667 cri.go:89] found id: ""
	I1205 07:50:48.040133  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.040156  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:48.040175  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:48.040269  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:48.066524  299667 cri.go:89] found id: ""
	I1205 07:50:48.066549  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.066558  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:48.066564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:48.066627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:48.096997  299667 cri.go:89] found id: ""
	I1205 07:50:48.097026  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.097036  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:48.097043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:48.097103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:48.123968  299667 cri.go:89] found id: ""
	I1205 07:50:48.123990  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.123999  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:48.124005  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:48.124066  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:48.151529  299667 cri.go:89] found id: ""
	I1205 07:50:48.151554  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.151564  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:48.151570  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:48.151629  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:48.181245  299667 cri.go:89] found id: ""
	I1205 07:50:48.181270  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.181279  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:48.181297  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:48.181308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:48.240786  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:48.240832  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:48.255504  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:48.255533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:48.325828  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:48.325849  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:48.325862  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:48.350818  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:48.350898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:50.103331  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:52.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:50.887376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:50.898712  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:50.898787  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:50.926387  299667 cri.go:89] found id: ""
	I1205 07:50:50.926412  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.926421  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:50.926428  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:50.926499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:50.951318  299667 cri.go:89] found id: ""
	I1205 07:50:50.951341  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.951349  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:50.951356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:50.951431  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:50.978509  299667 cri.go:89] found id: ""
	I1205 07:50:50.978536  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.978545  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:50.978551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:50.978614  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:51.017851  299667 cri.go:89] found id: ""
	I1205 07:50:51.017875  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.017884  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:51.017894  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:51.017957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:51.048705  299667 cri.go:89] found id: ""
	I1205 07:50:51.048772  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.048797  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:51.048815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:51.048901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:51.078364  299667 cri.go:89] found id: ""
	I1205 07:50:51.078427  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.078448  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:51.078468  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:51.078560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:51.110914  299667 cri.go:89] found id: ""
	I1205 07:50:51.110955  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.110965  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:51.110970  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:51.111064  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:51.136737  299667 cri.go:89] found id: ""
	I1205 07:50:51.136762  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.136771  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:51.136781  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:51.136793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:51.197928  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:51.197949  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:51.197961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:51.222938  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:51.222968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:51.253887  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:51.253914  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:51.309729  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:51.309759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
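Every cycle ends with the same log-gathering bundle (kubelet, dmesg, describe nodes, containerd, container status), driven by logs.go:123. The commands appear verbatim above and can be collected in one pass on the node; the function name and output path below are illustrative, the commands themselves are the ones logged:

    gather_node_logs() {
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo journalctl -u containerd -n 400
      sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    }
    gather_node_logs > /tmp/minikube-node-logs.txt 2>&1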
	I1205 07:50:53.824280  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:53.834821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:53.834895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:53.882567  299667 cri.go:89] found id: ""
	I1205 07:50:53.882607  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.882617  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:53.882623  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:53.882708  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:53.924413  299667 cri.go:89] found id: ""
	I1205 07:50:53.924439  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.924447  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:53.924454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:53.924521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:53.949296  299667 cri.go:89] found id: ""
	I1205 07:50:53.949329  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.949339  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:53.949345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:53.949421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:53.973974  299667 cri.go:89] found id: ""
	I1205 07:50:53.974036  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.974050  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:53.974058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:53.974114  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:53.999073  299667 cri.go:89] found id: ""
	I1205 07:50:53.999139  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.999154  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:53.999162  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:53.999221  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:54.026401  299667 cri.go:89] found id: ""
	I1205 07:50:54.026425  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.026434  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:54.026441  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:54.026523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:54.056156  299667 cri.go:89] found id: ""
	I1205 07:50:54.056181  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.056191  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:54.056197  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:54.056266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:54.080916  299667 cri.go:89] found id: ""
	I1205 07:50:54.080955  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.080964  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:54.080973  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:54.080985  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:54.105836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:54.105870  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:54.134673  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:54.134702  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:54.191141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:54.191175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:54.204290  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:54.204332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:54.267087  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
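	The block ending above is one full pass of minikube's diagnostic loop for a profile whose control plane never came up: pgrep finds no kube-apiserver process, crictl returns an empty ID list for every control-plane component in the k8s.io containerd namespace (root /run/containerd/runc/k8s.io), and the fallback log gathering collects the kubelet and containerd journals, kernel warnings, and a describe-nodes call that fails because nothing is listening on localhost:8443. The container-status step even chains a fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so it still yields output on Docker-only nodes. A minimal sketch of running the same checks by hand, where <profile> is a hypothetical placeholder for the affected profile name:
	
	# Re-run minikube's checks manually inside the node (sketch):
	minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name="$c"   # empty output = container never created
	done
	minikube ssh -p <profile> -- 'sudo journalctl -u kubelet -n 400'       # usually explains why pods never started
	minikube ssh -p <profile> -- 'sudo journalctl -u containerd -n 400'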
	W1205 07:50:55.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:57.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:59.602402  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
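	Interleaved with that loop, a second test process (pid 297527) is waiting on the no-preload-241270 node, polling its Node object directly at 192.168.76.2:8443 and getting connection refused on every retry. Equivalent manual probes, as a sketch built from the endpoints in the log (the jsonpath form applies once the apiserver is reachable):
	
	curl -sk https://192.168.76.2:8443/healthz        # "connection refused" while the apiserver is down
	kubectl get node no-preload-241270 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'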
	I1205 07:50:56.768821  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:56.779222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:56.779288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:56.807155  299667 cri.go:89] found id: ""
	I1205 07:50:56.807179  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.807188  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:56.807195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:56.807280  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:56.831710  299667 cri.go:89] found id: ""
	I1205 07:50:56.831737  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.831746  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:56.831753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:56.831812  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:56.867145  299667 cri.go:89] found id: ""
	I1205 07:50:56.867169  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.867178  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:56.867185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:56.867243  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:56.893127  299667 cri.go:89] found id: ""
	I1205 07:50:56.893152  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.893174  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:56.893180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:56.893237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:56.922421  299667 cri.go:89] found id: ""
	I1205 07:50:56.922450  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.922460  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:56.922466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:56.922543  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:56.945778  299667 cri.go:89] found id: ""
	I1205 07:50:56.945808  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.945817  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:56.945823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:56.945907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:56.974442  299667 cri.go:89] found id: ""
	I1205 07:50:56.974473  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.974482  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:56.974489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:56.974559  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:56.998662  299667 cri.go:89] found id: ""
	I1205 07:50:56.998685  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.998694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:56.998703  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:56.998715  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:57.058833  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:57.058867  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:57.072293  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:57.072322  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:57.139010  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:50:57.139030  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:57.139042  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:57.163607  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:57.163639  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
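	Two of the gathering steps are worth decoding. The dmesg invocation narrows the kernel ring buffer to severities warn and above with paging and colour disabled, and the describe-nodes step runs the version-matched kubectl that minikube ships inside the node against the in-node kubeconfig. Annotated sketch (assumption: util-linux dmesg, as shipped in the minikube base image):
	
	sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# -P: no pager   -H: human-readable timestamps   -L=never: no colour
	# --level ...: keep only these syslog severities
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig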
	I1205 07:50:59.693334  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:59.704756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:59.704870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:59.732171  299667 cri.go:89] found id: ""
	I1205 07:50:59.732198  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.732208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:59.732214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:59.732272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:59.757954  299667 cri.go:89] found id: ""
	I1205 07:50:59.757981  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.757990  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:59.757996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:59.758076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:59.787824  299667 cri.go:89] found id: ""
	I1205 07:50:59.787846  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.787855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:59.787862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:59.787977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:59.813474  299667 cri.go:89] found id: ""
	I1205 07:50:59.813497  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.813506  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:59.813512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:59.813580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:59.842057  299667 cri.go:89] found id: ""
	I1205 07:50:59.842079  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.842088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:59.842094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:59.842162  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:59.872569  299667 cri.go:89] found id: ""
	I1205 07:50:59.872593  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.872602  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:59.872608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:59.872671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:59.905410  299667 cri.go:89] found id: ""
	I1205 07:50:59.905435  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.905443  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:59.905450  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:59.905514  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:59.932703  299667 cri.go:89] found id: ""
	I1205 07:50:59.932744  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.932754  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:59.932763  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:59.932774  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.964043  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:59.964069  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:00.020877  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:00.023486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:00.055130  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:00.055166  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:02.102411  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:04.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:00.182237  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:00.182280  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:00.182298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:02.739834  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:02.750886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:02.750958  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:02.776293  299667 cri.go:89] found id: ""
	I1205 07:51:02.776319  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.776328  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:02.776334  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:02.776393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:02.803043  299667 cri.go:89] found id: ""
	I1205 07:51:02.803080  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.803089  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:02.803096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:02.803176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:02.827935  299667 cri.go:89] found id: ""
	I1205 07:51:02.827957  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.827966  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:02.827972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:02.828031  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:02.859181  299667 cri.go:89] found id: ""
	I1205 07:51:02.859204  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.859215  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:02.859222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:02.859282  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:02.893626  299667 cri.go:89] found id: ""
	I1205 07:51:02.893668  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.893678  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:02.893685  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:02.893755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:02.924778  299667 cri.go:89] found id: ""
	I1205 07:51:02.924808  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.924818  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:02.924830  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:02.924890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:02.950184  299667 cri.go:89] found id: ""
	I1205 07:51:02.950211  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.950220  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:02.950229  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:02.950288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:02.976829  299667 cri.go:89] found id: ""
	I1205 07:51:02.976855  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.976865  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:02.976874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:02.976885  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:03.015998  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:03.016071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:03.072438  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:03.072473  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:03.087250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:03.087283  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:03.153281  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:03.153306  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:03.153319  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:07.103249  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:09.602341  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:05.678289  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:05.688964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:05.689032  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:05.714382  299667 cri.go:89] found id: ""
	I1205 07:51:05.714403  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.714412  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:05.714419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:05.714486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:05.743946  299667 cri.go:89] found id: ""
	I1205 07:51:05.743968  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.743976  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:05.743983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:05.744043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:05.768270  299667 cri.go:89] found id: ""
	I1205 07:51:05.768293  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.768303  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:05.768309  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:05.768367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:05.795557  299667 cri.go:89] found id: ""
	I1205 07:51:05.795580  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.795588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:05.795595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:05.795652  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:05.820607  299667 cri.go:89] found id: ""
	I1205 07:51:05.820634  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.820643  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:05.820649  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:05.820707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:05.853624  299667 cri.go:89] found id: ""
	I1205 07:51:05.853648  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.853657  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:05.853670  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:05.853752  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:05.885144  299667 cri.go:89] found id: ""
	I1205 07:51:05.885200  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.885213  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:05.885219  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:05.885296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:05.917755  299667 cri.go:89] found id: ""
	I1205 07:51:05.917777  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.917785  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:05.917794  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:05.917808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:05.978242  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:05.978286  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:05.992931  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:05.992961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:06.070949  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:06.070979  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:06.070992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:06.096749  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:06.096780  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.634532  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:08.646959  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:08.647038  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:08.678851  299667 cri.go:89] found id: ""
	I1205 07:51:08.678875  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.678884  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:08.678890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:08.678954  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:08.702970  299667 cri.go:89] found id: ""
	I1205 07:51:08.702992  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.703001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:08.703006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:08.703063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:08.727238  299667 cri.go:89] found id: ""
	I1205 07:51:08.727259  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.727267  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:08.727273  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:08.727329  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:08.752084  299667 cri.go:89] found id: ""
	I1205 07:51:08.752106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.752114  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:08.752120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:08.752183  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:08.775775  299667 cri.go:89] found id: ""
	I1205 07:51:08.775797  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.775805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:08.775811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:08.775878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:08.800101  299667 cri.go:89] found id: ""
	I1205 07:51:08.800122  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.800130  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:08.800136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:08.800193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:08.826081  299667 cri.go:89] found id: ""
	I1205 07:51:08.826106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.826115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:08.826121  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:08.826179  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:08.850937  299667 cri.go:89] found id: ""
	I1205 07:51:08.850969  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.850979  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:08.850987  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:08.851004  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.884057  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:08.884093  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:08.946750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:08.946793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:08.960852  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:08.960880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:09.030565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:09.030587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:09.030601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:11.602638  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:12.602298  297527 node_ready.go:38] duration metric: took 6m0.000452624s for node "no-preload-241270" to be "Ready" ...
	I1205 07:51:12.605551  297527 out.go:203] 
	W1205 07:51:12.608371  297527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 07:51:12.608388  297527 out.go:285] * 
	W1205 07:51:12.610554  297527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:51:12.612665  297527 out.go:203] 
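	For pid 297527 this is the terminal failure: the wait on node no-preload-241270 hit its 6m0s deadline without the Ready condition ever becoming true, so minikube exits with GUEST_START and prints the advice box above. Following that advice, with the profile flag added (sketch):
	
	minikube logs -p no-preload-241270 --file=logs.txt   # full log bundle to attach to a GitHub issue
	minikube status -p no-preload-241270                 # shows which components minikube considers down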
	I1205 07:51:11.556651  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:11.567626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:11.567701  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:11.595760  299667 cri.go:89] found id: ""
	I1205 07:51:11.595786  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.595795  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:11.595802  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:11.595859  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:11.646030  299667 cri.go:89] found id: ""
	I1205 07:51:11.646056  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.646065  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:11.646072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:11.646138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:11.675282  299667 cri.go:89] found id: ""
	I1205 07:51:11.675310  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.675319  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:11.675325  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:11.675385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:11.699688  299667 cri.go:89] found id: ""
	I1205 07:51:11.699712  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.699721  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:11.699727  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:11.699791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:11.723819  299667 cri.go:89] found id: ""
	I1205 07:51:11.723843  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.723852  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:11.723859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:11.723915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:11.751470  299667 cri.go:89] found id: ""
	I1205 07:51:11.751496  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.751505  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:11.751512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:11.751568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:11.775893  299667 cri.go:89] found id: ""
	I1205 07:51:11.775921  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.775929  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:11.775936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:11.775993  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:11.802990  299667 cri.go:89] found id: ""
	I1205 07:51:11.803012  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.803021  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:11.803033  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:11.803044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:11.859684  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:11.859767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:11.876859  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:11.876889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:11.952118  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:11.944168   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.944893   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.946566   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.947157   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.948800   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:11.952191  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:11.952220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:11.976596  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:11.976630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.510895  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:14.522084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:14.522151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:14.554050  299667 cri.go:89] found id: ""
	I1205 07:51:14.554069  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.554078  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:14.554084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:14.554139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:14.581712  299667 cri.go:89] found id: ""
	I1205 07:51:14.581732  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.581740  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:14.581746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:14.581810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:14.658701  299667 cri.go:89] found id: ""
	I1205 07:51:14.658723  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.658731  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:14.658737  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:14.658803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:14.686921  299667 cri.go:89] found id: ""
	I1205 07:51:14.686940  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.686948  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:14.686954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:14.687024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:14.720928  299667 cri.go:89] found id: ""
	I1205 07:51:14.720949  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.720957  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:14.720972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:14.721046  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:14.758959  299667 cri.go:89] found id: ""
	I1205 07:51:14.758983  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.758992  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:14.758998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:14.759054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:14.810754  299667 cri.go:89] found id: ""
	I1205 07:51:14.810775  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.810888  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:14.810895  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:14.810966  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:14.865350  299667 cri.go:89] found id: ""
	I1205 07:51:14.865369  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.865379  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:14.865387  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:14.865398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:14.920139  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:14.920170  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.973197  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:14.973224  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:15.042929  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:15.042968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:15.069350  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:15.069377  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:15.167229  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:15.157061   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.158379   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.159455   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.160498   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.161615   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:17.667454  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:17.677695  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:17.677767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:17.710656  299667 cri.go:89] found id: ""
	I1205 07:51:17.710678  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.710687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:17.710693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:17.710755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:17.738643  299667 cri.go:89] found id: ""
	I1205 07:51:17.738665  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.738674  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:17.738680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:17.738736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:17.762784  299667 cri.go:89] found id: ""
	I1205 07:51:17.762806  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.762815  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:17.762821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:17.762880  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:17.788678  299667 cri.go:89] found id: ""
	I1205 07:51:17.788699  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.788714  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:17.788720  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:17.788776  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:17.818009  299667 cri.go:89] found id: ""
	I1205 07:51:17.818031  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.818040  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:17.818046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:17.818103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:17.850251  299667 cri.go:89] found id: ""
	I1205 07:51:17.850272  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.850288  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:17.850295  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:17.850354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:17.879482  299667 cri.go:89] found id: ""
	I1205 07:51:17.879503  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.879512  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:17.879518  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:17.879579  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:17.916240  299667 cri.go:89] found id: ""
	I1205 07:51:17.916261  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.916270  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:17.916278  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:17.916344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:17.945888  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:17.945915  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:18.004030  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:18.004079  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:18.022346  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:18.022422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:18.096445  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:18.087987   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.088572   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090232   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090775   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.092338   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:18.096468  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:18.096481  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.623691  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:20.635279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:20.635409  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:20.670295  299667 cri.go:89] found id: ""
	I1205 07:51:20.670369  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.670390  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:20.670410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:20.670493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:20.701924  299667 cri.go:89] found id: ""
	I1205 07:51:20.701948  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.701957  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:20.701964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:20.702055  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:20.727557  299667 cri.go:89] found id: ""
	I1205 07:51:20.727599  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.727622  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:20.727638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:20.727714  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:20.753615  299667 cri.go:89] found id: ""
	I1205 07:51:20.753640  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.753648  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:20.753655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:20.753744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:20.778426  299667 cri.go:89] found id: ""
	I1205 07:51:20.778450  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.778459  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:20.778466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:20.778556  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:20.803580  299667 cri.go:89] found id: ""
	I1205 07:51:20.803605  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.803615  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:20.803638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:20.803707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:20.833142  299667 cri.go:89] found id: ""
	I1205 07:51:20.833193  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.833202  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:20.833208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:20.833285  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:20.868368  299667 cri.go:89] found id: ""
	I1205 07:51:20.868443  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.868465  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:20.868486  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:20.868523  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.895451  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:20.895524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:20.926652  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:20.926677  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:20.981657  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:20.981692  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:20.995302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:20.995329  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:21.064074  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:21.055838   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.056503   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.058334   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.059023   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.060931   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:23.564875  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:23.575583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:23.575650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:23.616208  299667 cri.go:89] found id: ""
	I1205 07:51:23.616234  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.616243  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:23.616251  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:23.616314  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:23.645044  299667 cri.go:89] found id: ""
	I1205 07:51:23.645068  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.645077  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:23.645083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:23.645148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:23.679840  299667 cri.go:89] found id: ""
	I1205 07:51:23.679861  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.679870  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:23.679876  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:23.679931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:23.704932  299667 cri.go:89] found id: ""
	I1205 07:51:23.704954  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.704962  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:23.704980  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:23.705040  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:23.730380  299667 cri.go:89] found id: ""
	I1205 07:51:23.730403  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.730411  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:23.730418  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:23.730483  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:23.754200  299667 cri.go:89] found id: ""
	I1205 07:51:23.754224  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.754233  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:23.754240  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:23.754318  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:23.778888  299667 cri.go:89] found id: ""
	I1205 07:51:23.778913  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.778921  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:23.778927  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:23.778983  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:23.803021  299667 cri.go:89] found id: ""
	I1205 07:51:23.803045  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.803054  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:23.803063  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:23.803074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:23.859725  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:23.859805  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:23.878639  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:23.878714  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:23.953245  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:23.953267  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:23.953280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:23.978428  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:23.978460  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:26.510161  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:26.520589  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:26.520663  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:26.545475  299667 cri.go:89] found id: ""
	I1205 07:51:26.545500  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.545508  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:26.545515  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:26.545570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:26.570378  299667 cri.go:89] found id: ""
	I1205 07:51:26.570401  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.570409  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:26.570416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:26.570476  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:26.596521  299667 cri.go:89] found id: ""
	I1205 07:51:26.596547  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.596556  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:26.596562  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:26.596618  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:26.624228  299667 cri.go:89] found id: ""
	I1205 07:51:26.624255  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.624264  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:26.624280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:26.624336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:26.650763  299667 cri.go:89] found id: ""
	I1205 07:51:26.650797  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.650807  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:26.650813  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:26.650870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:26.681944  299667 cri.go:89] found id: ""
	I1205 07:51:26.681972  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.681980  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:26.681987  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:26.682043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:26.706897  299667 cri.go:89] found id: ""
	I1205 07:51:26.706918  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.706927  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:26.706933  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:26.706991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:26.732536  299667 cri.go:89] found id: ""
	I1205 07:51:26.732560  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.732569  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:26.732578  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:26.732619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:26.789640  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:26.789673  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:26.803060  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:26.803089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:26.884697  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:26.884720  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:26.884737  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:26.912821  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:26.912856  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:29.445153  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:29.455673  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:29.455740  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:29.479669  299667 cri.go:89] found id: ""
	I1205 07:51:29.479694  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.479702  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:29.479709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:29.479768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:29.504129  299667 cri.go:89] found id: ""
	I1205 07:51:29.504151  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.504160  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:29.504166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:29.504223  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:29.528037  299667 cri.go:89] found id: ""
	I1205 07:51:29.528061  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.528071  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:29.528077  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:29.528137  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:29.553104  299667 cri.go:89] found id: ""
	I1205 07:51:29.553129  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.553138  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:29.553145  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:29.553252  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:29.582155  299667 cri.go:89] found id: ""
	I1205 07:51:29.582180  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.582189  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:29.582195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:29.582251  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:29.616156  299667 cri.go:89] found id: ""
	I1205 07:51:29.616181  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.616190  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:29.616205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:29.616279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:29.643373  299667 cri.go:89] found id: ""
	I1205 07:51:29.643399  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.643407  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:29.643413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:29.643474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:29.669624  299667 cri.go:89] found id: ""
	I1205 07:51:29.669649  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.669658  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:29.669667  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:29.669678  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:29.725864  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:29.725897  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:29.739284  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:29.739311  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:29.812338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:29.812358  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:29.812371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:29.837776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:29.837808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:32.374773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:32.385440  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:32.385519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:32.410264  299667 cri.go:89] found id: ""
	I1205 07:51:32.410285  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.410294  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:32.410301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:32.410380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:32.435693  299667 cri.go:89] found id: ""
	I1205 07:51:32.435716  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.435724  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:32.435730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:32.435789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:32.459782  299667 cri.go:89] found id: ""
	I1205 07:51:32.459854  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.459865  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:32.459872  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:32.460140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:32.490196  299667 cri.go:89] found id: ""
	I1205 07:51:32.490221  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.490230  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:32.490236  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:32.490302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:32.515432  299667 cri.go:89] found id: ""
	I1205 07:51:32.515456  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.515465  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:32.515472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:32.515535  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:32.544631  299667 cri.go:89] found id: ""
	I1205 07:51:32.544657  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.544666  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:32.544672  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:32.544733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:32.568734  299667 cri.go:89] found id: ""
	I1205 07:51:32.568759  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.568768  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:32.568785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:32.568841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:32.593347  299667 cri.go:89] found id: ""
	I1205 07:51:32.593375  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.593385  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:32.593394  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:32.593406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:32.663939  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:32.663975  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:32.678486  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:32.678514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:32.740819  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:32.740842  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:32.740854  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:32.765510  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:32.765539  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:35.296522  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:35.310277  299667 out.go:203] 
	W1205 07:51:35.313261  299667 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 07:51:35.313316  299667 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 07:51:35.313333  299667 out.go:285] * Related issues:
	W1205 07:51:35.313353  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 07:51:35.313373  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 07:51:35.316371  299667 out.go:203] 
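The loop above is minikube's apiserver readiness probe: every few seconds it checks for a kube-apiserver process and CRI container, gathering kubelet, dmesg, containerd, and "describe nodes" output between attempts, until the 6m0s budget runs out and it exits with K8S_APISERVER_MISSING. The probe can be reproduced by hand against the node; a minimal sketch using the exact commands from the log above (the profile name newest-cni-622440 comes from this run):

	# Does an apiserver process exist at all?
	out/minikube-linux-arm64 ssh -p newest-cni-622440 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Was an apiserver container ever created, in any state?
	out/minikube-linux-arm64 ssh -p newest-cni-622440 -- sudo crictl ps -a --quiet --name=kube-apiserver

Both come back empty here, which is why every cycle logs "0 containers" and the start ultimately fails.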
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209287352Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209303147Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209319738Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209338060Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209354355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209371619Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209407246Z" level=info msg="runtime interface created"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209414106Z" level=info msg="created NRI interface"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209431698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209473470Z" level=info msg="Connect containerd service"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209745990Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.210997942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227442652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227515662Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227533837Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227584988Z" level=info msg="Start recovering state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248902324Z" level=info msg="Start event monitor"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248944278Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248954567Z" level=info msg="Start streaming server"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248967343Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248975425Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248982071Z" level=info msg="starting plugins..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249010797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249144378Z" level=info msg="containerd successfully booted in 0.058238s"
	Dec 05 07:45:31 newest-cni-622440 systemd[1]: Started containerd.service - containerd container runtime.
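Two notes on the containerd log above. First, the only error line is the CNI loader complaining that /etc/cni/net.d holds no network config, which is expected for a newest-cni profile before any CNI config has been written. Second, containerd itself boots cleanly ("containerd successfully booted in 0.058238s"), so the runtime is not the failing component. A sketch for confirming the runtime is healthy and merely waiting for a network config (assuming crictl inside the node is already pointed at the containerd socket, as it is in minikube images):

	out/minikube-linux-arm64 ssh -p newest-cni-622440 -- sudo ls -la /etc/cni/net.d
	out/minikube-linux-arm64 ssh -p newest-cni-622440 -- sudo crictl info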
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:38.579131   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:38.579647   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:38.581361   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:38.581849   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:38.583414   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:38 up  2:34,  0 user,  load average: 0.83, 0.78, 1.29
	Linux newest-cni-622440 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:51:35 newest-cni-622440 kubelet[13347]: E1205 07:51:35.662524   13347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:35 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:35 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:36 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 05 07:51:36 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:36 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:36 newest-cni-622440 kubelet[13353]: E1205 07:51:36.419306   13353 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:36 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:36 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:37 newest-cni-622440 kubelet[13373]: E1205 07:51:37.153975   13373 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:37 newest-cni-622440 kubelet[13379]: E1205 07:51:37.925975   13379 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:37 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:38 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 05 07:51:38 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:38 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:38 newest-cni-622440 kubelet[13478]: E1205 07:51:38.659781   13478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
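This kubelet section is the root cause of the whole failure: kubelet v1.35.0-beta.0 refuses to pass configuration validation on a cgroup v1 host, systemd restart-loops it (restart counters 486 through 489 above), and with no kubelet the apiserver static pod can never be created, which explains the K8S_APISERVER_MISSING exit earlier. Under the docker driver the cgroup mode is inherited from the host kernel, and the Ubuntu 20.04 kernel shown in the kernel section (5.15.0-1084-aws) defaults to the legacy v1/hybrid hierarchy. A quick check, as a sketch:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1/hybrid hierarchy.
	stat -fc %T /sys/fs/cgroup/
	# On systemd hosts, cgroup v2 can be forced with the kernel parameter
	# systemd.unified_cgroup_hierarchy=1 (takes effect after a reboot).
	cat /proc/cmdline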
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (358.79857ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-622440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (375.43s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.88s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 12 more times, identically]
E1205 07:51:29.377205    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 47 more times, identically]
E1205 07:52:16.967905    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 27 more times, identically]
E1205 07:52:44.878960    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 16 more times, identically]
E1205 07:53:01.797034    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 8 more times, identically]
E1205 07:53:11.309193    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 28 more times, identically]
E1205 07:53:40.041882    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeats 23 more times in this excerpt, identically]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 07:54:14.019463    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
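The E-level line above is client-go's cert-rotation logic failing to reload a client certificate whose file no longer exists (the functional-226068 profile directory has been removed). As a minimal sketch of that failure mode only — not the actual cert_rotation.go code — loading a missing key pair in Go surfaces the same "no such file or directory" error. The .crt path is copied from the log; the .key path is an assumption (a sibling file in the same profile directory):

```go
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// .crt path taken from the log line above; the profile directory has
	// been deleted, so the open fails with ENOENT. The .key path is
	// assumed, for illustration, to sit next to the .crt.
	certFile := "/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt"
	keyFile := "/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.key"
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		// Mirrors the logged error:
		// "Loading client cert failed" err="open ...: no such file or directory"
		log.Printf("Loading client cert failed: %v", err)
	}
}
```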
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last line repeated 51 more times]
E1205 07:55:06.311146    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last line repeated 75 more times]
I1205 07:56:21.839655    4192 config.go:182] Loaded profile config "calico-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last line repeated 23 more times]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
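Every warning in this run is the same failed call: the test helper polls the apiserver at 192.168.76.2:8443 for dashboard pods by label and logs the dial error for as long as the stopped node's apiserver is unreachable. A minimal client-go sketch of an equivalent poll (the host, namespace, and label selector are taken from the log; the loop shape, sleep interval, and insecure TLS setting are illustrative assumptions, not minikube's actual helper code):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and selector come from the warnings above; credentials are
	// omitted, so TLS verification is disabled for this sketch only.
	cfg := &rest.Config{
		Host:            "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down this prints the same
			// "dial tcp ... connect: connection refused" seen above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("found %d kubernetes-dashboard pods\n", len(pods.Items))
		return
	}
}

For comparison, the same request is what `kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard` would issue against that endpoint.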
E1205 07:57:16.967961    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 33 more times]
I1205 07:57:50.750128    4192 config.go:182] Loaded profile config "custom-flannel-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 10 more times]
E1205 07:58:01.797585    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 8 more times]
E1205 07:58:11.309416    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 2 more times]
E1205 07:58:14.509439    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:14.516778    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:14.528167    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:14.549589    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:14.590993    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:14.673066    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
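The cert_rotation errors above are client-go's transport cache trying to reload a client certificate for a profile (here auto-183381) whose files were already removed by an earlier cleanup; the retry timestamps roughly double their spacing (14.516, 14.528, 14.549, 14.590, 14.673, ...), i.e. an exponential backoff. The underlying filesystem error can be reproduced directly (the client.crt path is from the log; the matching client.key path alongside it is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// The profile directory was deleted, so reloading the key pair fails
	// with the same "no such file or directory" the rotation logger reports.
	base := "/home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381"
	if _, err := tls.LoadX509KeyPair(base+"/client.crt", base+"/client.key"); err != nil {
		fmt.Println("Loading client cert failed:", err)
	}
}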
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 07:58:14.834991    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:58:15.157365    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 07:58:15.798725    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 07:58:17.080051    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 7 more times]
E1205 07:58:24.762827    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 9 more times]
E1205 07:58:35.004628    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 19 more times]
E1205 07:58:55.485862    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 07:58:57.097054    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 9 more times]
I1205 07:59:06.898315    4192 config.go:182] Loaded profile config "enable-default-cni-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 6 more times]
E1205 07:59:14.019386    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 19 more times]
E1205 07:59:34.042788    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.049112    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.060429    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.082427    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.123787    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.205180    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.373134    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:34.695049    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 1 more time]
E1205 07:59:36.448144    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:59:36.618626    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 2 more times]
E1205 07:59:39.180864    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 4 more times]
E1205 07:59:44.302855    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 9 more times]
E1205 07:59:54.545103    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 11 more times]
E1205 08:00:06.311124    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/old-k8s-version-943366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 8 more times]
E1205 08:00:15.027052    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 2 (314.120961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
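For readers unfamiliar with what this wait is doing: the helper repeatedly lists pods matching the k8s-app=kubernetes-dashboard label selector until one is Running or the 9m0s window expires, which is why the log above is a stream of per-attempt WARNING lines ending in "context deadline exceeded". A minimal client-go sketch of that polling pattern follows; the kubeconfig path, namespace, and intervals are illustrative assumptions, not the actual helpers_test.go code.

	// sketch.go: poll for a dashboard pod by label (illustrative only).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the test suite uses its own profile paths.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 3s for up to 9m, mirroring the "within 9m0s" window above.
		err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=kubernetes-dashboard",
				})
				if err != nil {
					// List errors (e.g. connection refused) are logged and retried;
					// that retry loop is what emits the repeated WARNING lines.
					fmt.Println("WARNING: pod list returned:", err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("failed waiting for dashboard pod:", err) // context deadline exceeded
		}
	}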
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:04.977832919Z",
	            "FinishedAt": "2025-12-05T07:45:03.670727358Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a57e08b617e6c99db8e0606f807966baa2265951deec9d7f31b28b674772ba7",
	            "SandboxKey": "/var/run/docker/netns/6a57e08b617e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:5e:e9:4a:59:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "8aadf1070cfccbd0175d1614c4a1ee7cb617e6ca8ef7cab3c7e2ce89af3cf831",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
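Two details in the inspect output above frame the failure: the container has been running since 07:45:04, yet every probe of the apiserver is refused. The published ports map 8443/tcp to 127.0.0.1:33101 on the host, and the container holds 192.168.76.2 on the no-preload-241270 network. A plain TCP dial against both endpoints separates "container up" from "apiserver listening"; the sketch below is illustrative and not part of the test suite.

	// probe.go: distinguish a live container from a listening apiserver.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoints taken from the inspect output above: the container IP on its
		// Docker network, and the host-published port for 8443/tcp.
		for _, addr := range []string{"192.168.76.2:8443", "127.0.0.1:33101"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Println(addr, "=>", err) // e.g. connect: connection refused
				continue
			}
			conn.Close()
			fmt.Println(addr, "=> accepting connections")
		}
	}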
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 2 (355.074561ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
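Both status invocations above pass a Go template via --format ({{.APIServer}} and {{.Host}}), rendered against minikube's status structure, which is why one prints "Stopped" while the other prints "Running": the host container is up but the apiserver inside it is not. A minimal illustration of that template mechanism, using a stand-in struct rather than minikube's actual type:

	// format.go: how a --format={{.Host}} style Go template renders a struct.
	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the status fields seen above; not minikube's real type.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Stopped
	}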
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl status kubelet --all --full --no-pager                                                             │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl cat kubelet --no-pager                                                                             │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo journalctl -xeu kubelet --all --full --no-pager                                                              │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /etc/kubernetes/kubelet.conf                                                                             │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /var/lib/kubelet/config.yaml                                                                             │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl status docker --all --full --no-pager                                                              │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl cat docker --no-pager                                                                              │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /etc/docker/daemon.json                                                                                  │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo docker system info                                                                                           │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl status cri-docker --all --full --no-pager                                                          │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl cat cri-docker --no-pager                                                                          │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                     │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /usr/lib/systemd/system/cri-docker.service                                                               │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cri-dockerd --version                                                                                        │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl status containerd --all --full --no-pager                                                          │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl cat containerd --no-pager                                                                          │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /lib/systemd/system/containerd.service                                                                   │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo cat /etc/containerd/config.toml                                                                              │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo containerd config dump                                                                                       │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl status crio --all --full --no-pager                                                                │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	│ ssh     │ -p enable-default-cni-183381 sudo systemctl cat crio --no-pager                                                                                │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                      │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ ssh     │ -p enable-default-cni-183381 sudo crio config                                                                                                  │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ delete  │ -p enable-default-cni-183381                                                                                                                   │ enable-default-cni-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │ 05 Dec 25 07:59 UTC │
	│ start   │ -p flannel-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-183381            │ jenkins │ v1.37.0 │ 05 Dec 25 07:59 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:59:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:59:37.836314  353293 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:59:37.836428  353293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:59:37.836439  353293 out.go:374] Setting ErrFile to fd 2...
	I1205 07:59:37.836444  353293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:59:37.836686  353293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:59:37.837094  353293 out.go:368] Setting JSON to false
	I1205 07:59:37.838002  353293 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9725,"bootTime":1764911853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:59:37.838067  353293 start.go:143] virtualization:  
	I1205 07:59:37.842523  353293 out.go:179] * [flannel-183381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:59:37.845980  353293 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:59:37.846070  353293 notify.go:221] Checking for updates...
	I1205 07:59:37.852577  353293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:59:37.855807  353293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:59:37.858944  353293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:59:37.862175  353293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:59:37.865301  353293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:59:37.868970  353293 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:59:37.869090  353293 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:59:37.896355  353293 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:59:37.896474  353293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:59:37.958760  353293 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:59:37.948384628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:59:37.958868  353293 docker.go:319] overlay module found
	I1205 07:59:37.962085  353293 out.go:179] * Using the docker driver based on user configuration
	I1205 07:59:37.965097  353293 start.go:309] selected driver: docker
	I1205 07:59:37.965119  353293 start.go:927] validating driver "docker" against <nil>
	I1205 07:59:37.965133  353293 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:59:37.965881  353293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:59:38.035877  353293 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:59:38.025840649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
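Both `docker info` dumps above come from minikube shelling out to `docker system info --format "{{json .}}"` (the cli_runner line) and decoding the JSON (info.go:266). A minimal Go sketch of that same pattern, keeping only a few of the fields visible in the log; this is an illustration, not minikube's actual info.go:

```go
// Minimal sketch: run `docker system info --format "{{json .}}"` and decode
// a handful of the fields that appear in the log lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps just the fields this sketch looks at; the real JSON
// carries everything shown in the log (plugins, commits, warnings, ...).
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	OSType       string `json:"OSType"`
	Architecture string `json:"Architecture"`
	CgroupDriver string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%d CPUs, %d bytes RAM, %s/%s, cgroup driver %q\n",
		info.NCPU, info.MemTotal, info.OSType, info.Architecture, info.CgroupDriver)
}
```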
	I1205 07:59:38.036054  353293 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:59:38.036318  353293 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:59:38.039615  353293 out.go:179] * Using Docker driver with root privileges
	I1205 07:59:38.042573  353293 cni.go:84] Creating CNI manager for "flannel"
	I1205 07:59:38.042607  353293 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1205 07:59:38.042703  353293 start.go:353] cluster config:
	{Name:flannel-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:59:38.046075  353293 out.go:179] * Starting "flannel-183381" primary control-plane node in "flannel-183381" cluster
	I1205 07:59:38.049281  353293 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:59:38.052357  353293 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:59:38.055372  353293 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:59:38.055388  353293 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:59:38.055434  353293 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1205 07:59:38.055456  353293 cache.go:65] Caching tarball of preloaded images
	I1205 07:59:38.055547  353293 preload.go:238] Found /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 07:59:38.055561  353293 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1205 07:59:38.055671  353293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/config.json ...
	I1205 07:59:38.055693  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/config.json: {Name:mk7290056051e8f7a79b73190229477058252662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:38.079232  353293 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:59:38.079256  353293 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:59:38.079291  353293 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:59:38.079324  353293 start.go:360] acquireMachinesLock for flannel-183381: {Name:mke62ea731a85652156519ec34ddec23f3917266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:59:38.079469  353293 start.go:364] duration metric: took 118.352µs to acquireMachinesLock for "flannel-183381"
	I1205 07:59:38.079509  353293 start.go:93] Provisioning new machine with config: &{Name:flannel-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:59:38.079598  353293 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:59:38.083150  353293 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:59:38.083390  353293 start.go:159] libmachine.API.Create for "flannel-183381" (driver="docker")
	I1205 07:59:38.083421  353293 client.go:173] LocalClient.Create starting
	I1205 07:59:38.083509  353293 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 07:59:38.083562  353293 main.go:143] libmachine: Decoding PEM data...
	I1205 07:59:38.083581  353293 main.go:143] libmachine: Parsing certificate...
	I1205 07:59:38.083644  353293 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 07:59:38.083672  353293 main.go:143] libmachine: Decoding PEM data...
	I1205 07:59:38.083688  353293 main.go:143] libmachine: Parsing certificate...
	I1205 07:59:38.084161  353293 cli_runner.go:164] Run: docker network inspect flannel-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:59:38.108019  353293 cli_runner.go:211] docker network inspect flannel-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:59:38.108105  353293 network_create.go:284] running [docker network inspect flannel-183381] to gather additional debugging logs...
	I1205 07:59:38.108128  353293 cli_runner.go:164] Run: docker network inspect flannel-183381
	W1205 07:59:38.132223  353293 cli_runner.go:211] docker network inspect flannel-183381 returned with exit code 1
	I1205 07:59:38.132252  353293 network_create.go:287] error running [docker network inspect flannel-183381]: docker network inspect flannel-183381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-183381 not found
	I1205 07:59:38.132267  353293 network_create.go:289] output of [docker network inspect flannel-183381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-183381 not found
	
	** /stderr **
	I1205 07:59:38.132377  353293 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:59:38.163557  353293 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 07:59:38.163951  353293 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 07:59:38.164341  353293 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 07:59:38.164674  353293 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 07:59:38.165117  353293 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a706d0}
	I1205 07:59:38.165139  353293 network_create.go:124] attempt to create docker network flannel-183381 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:59:38.165247  353293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-183381 flannel-183381
	I1205 07:59:38.227711  353293 network_create.go:108] docker network flannel-183381 192.168.85.0/24 created
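The four "skipping subnet" probes and the final pick show minikube walking candidate private /24 networks whose third octet advances by 9 (49, 58, 67, 76, 85) until one is not claimed by an existing Docker bridge. A rough Go reconstruction of that probe, under those assumptions (the step size is inferred from the log, not taken from minikube's source):

```go
// Hypothetical reconstruction of the subnet probing above: collect the IPAM
// subnets of all existing Docker networks, then walk 192.168.<49+9k>.0/24
// candidates until one is free.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// usedSubnets returns the IPAM subnets of every existing Docker network.
func usedSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	used := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Fields(string(out)) {
			used[s] = true
		}
	}
	return used, nil
}

func main() {
	used, err := usedSubnets()
	if err != nil {
		panic(err)
	}
	for third := 49; third < 256; third += 9 { // 49, 58, 67, 76, 85, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if used[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return
	}
}
```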
	I1205 07:59:38.227743  353293 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-183381" container
	I1205 07:59:38.227816  353293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:59:38.244102  353293 cli_runner.go:164] Run: docker volume create flannel-183381 --label name.minikube.sigs.k8s.io=flannel-183381 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:59:38.262152  353293 oci.go:103] Successfully created a docker volume flannel-183381
	I1205 07:59:38.262247  353293 cli_runner.go:164] Run: docker run --rm --name flannel-183381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-183381 --entrypoint /usr/bin/test -v flannel-183381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:59:38.793057  353293 oci.go:107] Successfully prepared a docker volume flannel-183381
	I1205 07:59:38.793123  353293 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:59:38.793133  353293 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:59:38.793233  353293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-183381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 07:59:42.768186  353293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-183381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.974904747s)
	I1205 07:59:42.768219  353293 kic.go:203] duration metric: took 3.975081848s to extract preloaded images to volume ...
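The two cli_runner invocations above use a common Docker idiom: a named volume is populated by a throwaway container that mounts both the host tarball (read-only) and the volume, then runs tar with an lz4 filter. The same invocation wrapped in Go, as a sketch (the tarball path is shortened here; the image digest from the log is dropped for brevity):

```go
// Sketch: populate the named volume "flannel-183381" from an lz4-compressed
// host tarball via a throwaway container, mirroring the log's docker run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4" // elided prefix
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
		"-v", "flannel-183381:/extractDir", // named volume to fill
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}
```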
	W1205 07:59:42.768364  353293 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:59:42.768476  353293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:59:42.829729  353293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-183381 --name flannel-183381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-183381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-183381 --network flannel-183381 --ip 192.168.85.2 --volume flannel-183381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:59:43.136494  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Running}}
	I1205 07:59:43.158832  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 07:59:43.182183  353293 cli_runner.go:164] Run: docker exec flannel-183381 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:59:43.236707  353293 oci.go:144] the created container "flannel-183381" has a running status.
	I1205 07:59:43.236734  353293 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa...
	I1205 07:59:43.332403  353293 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:59:43.358358  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 07:59:43.383463  353293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:59:43.383488  353293 kic_runner.go:114] Args: [docker exec --privileged flannel-183381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:59:43.444038  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 07:59:43.466308  353293 machine.go:94] provisionDockerMachine start ...
	I1205 07:59:43.466406  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:43.490430  353293 main.go:143] libmachine: Using SSH client type: native
	I1205 07:59:43.490769  353293 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:59:43.490778  353293 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:59:43.491739  353293 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 07:59:46.644605  353293 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-183381
	
	I1205 07:59:46.644629  353293 ubuntu.go:182] provisioning hostname "flannel-183381"
	I1205 07:59:46.644700  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:46.667910  353293 main.go:143] libmachine: Using SSH client type: native
	I1205 07:59:46.668230  353293 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:59:46.668245  353293 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-183381 && echo "flannel-183381" | sudo tee /etc/hostname
	I1205 07:59:46.826393  353293 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-183381
	
	I1205 07:59:46.826469  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:46.844287  353293 main.go:143] libmachine: Using SSH client type: native
	I1205 07:59:46.844613  353293 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 07:59:46.844635  353293 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-183381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-183381/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-183381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:59:46.993240  353293 main.go:143] libmachine: SSH cmd err, output: <nil>: 
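provisionDockerMachine drives the node over SSH: the published host port (33133 in this run) is discovered with the `docker container inspect ... "22/tcp" ... HostPort` template shown above, and authentication uses the id_rsa generated for the kic container. A minimal sketch of the same `hostname` probe, assuming the golang.org/x/crypto/ssh package rather than minikube's libmachine client:

```go
// Minimal SSH probe (assumes golang.org/x/crypto/ssh): dial the published
// port from the log and run the same `hostname` check minikube starts with.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg) // port from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
```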
	I1205 07:59:46.993268  353293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:59:46.993296  353293 ubuntu.go:190] setting up certificates
	I1205 07:59:46.993305  353293 provision.go:84] configureAuth start
	I1205 07:59:46.993364  353293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-183381
	I1205 07:59:47.014309  353293 provision.go:143] copyHostCerts
	I1205 07:59:47.014385  353293 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:59:47.014397  353293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:59:47.014495  353293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:59:47.014601  353293 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:59:47.014611  353293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:59:47.014643  353293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:59:47.014715  353293 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:59:47.014724  353293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:59:47.014751  353293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:59:47.014813  353293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.flannel-183381 san=[127.0.0.1 192.168.85.2 flannel-183381 localhost minikube]
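configureAuth issues a server certificate signed by the minikube CA with the SAN set on the provision.go line above (127.0.0.1, 192.168.85.2, flannel-183381, localhost, minikube). A hedged Go sketch of issuing such a cert with crypto/x509; the CA below is a generated stand-in, where the real run loads ca.pem/ca-key.pem from .minikube/certs:

```go
// Hypothetical sketch (not minikube's crypto.go): issue a server cert signed
// by a CA, carrying the exact SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; the real run loads the existing ca.pem / ca-key.pem pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-183381"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// SANs copied from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"flannel-183381", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```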
	I1205 07:59:47.365951  353293 provision.go:177] copyRemoteCerts
	I1205 07:59:47.366023  353293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:59:47.366065  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:47.382957  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 07:59:47.488785  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:59:47.506233  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1205 07:59:47.523518  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:59:47.543198  353293 provision.go:87] duration metric: took 549.867937ms to configureAuth
	I1205 07:59:47.543275  353293 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:59:47.543494  353293 config.go:182] Loaded profile config "flannel-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:59:47.543507  353293 machine.go:97] duration metric: took 4.077176206s to provisionDockerMachine
	I1205 07:59:47.543514  353293 client.go:176] duration metric: took 9.460081033s to LocalClient.Create
	I1205 07:59:47.543540  353293 start.go:167] duration metric: took 9.460152181s to libmachine.API.Create "flannel-183381"
	I1205 07:59:47.543553  353293 start.go:293] postStartSetup for "flannel-183381" (driver="docker")
	I1205 07:59:47.543562  353293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:59:47.543630  353293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:59:47.543682  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:47.561132  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 07:59:47.665074  353293 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:59:47.668216  353293 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:59:47.668244  353293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:59:47.668255  353293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:59:47.668304  353293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:59:47.668383  353293 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:59:47.668488  353293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:59:47.675861  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:59:47.692301  353293 start.go:296] duration metric: took 148.733351ms for postStartSetup
	I1205 07:59:47.692704  353293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-183381
	I1205 07:59:47.709688  353293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/config.json ...
	I1205 07:59:47.709975  353293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:59:47.710025  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:47.732434  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 07:59:47.833893  353293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:59:47.838914  353293 start.go:128] duration metric: took 9.75930124s to createHost
	I1205 07:59:47.838937  353293 start.go:83] releasing machines lock for "flannel-183381", held for 9.759455473s
	I1205 07:59:47.839030  353293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-183381
	I1205 07:59:47.858908  353293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:59:47.858996  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:47.858913  353293 ssh_runner.go:195] Run: cat /version.json
	I1205 07:59:47.859279  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 07:59:47.878528  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 07:59:47.894003  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 07:59:48.090392  353293 ssh_runner.go:195] Run: systemctl --version
	I1205 07:59:48.096992  353293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:59:48.101629  353293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:59:48.101702  353293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:59:48.128581  353293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:59:48.128609  353293 start.go:496] detecting cgroup driver to use...
	I1205 07:59:48.128641  353293 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:59:48.128694  353293 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:59:48.144114  353293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:59:48.157759  353293 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:59:48.157871  353293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:59:48.177271  353293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:59:48.196152  353293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:59:48.308428  353293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:59:48.443908  353293 docker.go:234] disabling docker service ...
	I1205 07:59:48.443974  353293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:59:48.466886  353293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:59:48.480428  353293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:59:48.587115  353293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:59:48.719623  353293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:59:48.733241  353293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:59:48.749499  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:59:48.759358  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:59:48.768743  353293 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:59:48.768806  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:59:48.778178  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:59:48.787455  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:59:48.796013  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:59:48.804684  353293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:59:48.812875  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:59:48.821777  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:59:48.830640  353293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:59:48.839394  353293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:59:48.847051  353293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:59:48.854631  353293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:59:48.966515  353293 ssh_runner.go:195] Run: sudo systemctl restart containerd
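All of the containerd configuration above is done with in-place sed rewrites of /etc/containerd/config.toml, followed by a daemon-reload and restart. For illustration, the SystemdCgroup edit (containerd.go:146, forcing the cgroupfs driver) is equivalent to this Go regexp; the TOML snippet is a sample input, not the file's full contents:

```go
// Go equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// (?m) makes ^/$ match per line; ${1} preserves the original indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```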
	I1205 07:59:49.090142  353293 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:59:49.090266  353293 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:59:49.094411  353293 start.go:564] Will wait 60s for crictl version
	I1205 07:59:49.094557  353293 ssh_runner.go:195] Run: which crictl
	I1205 07:59:49.098033  353293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:59:49.127287  353293 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:59:49.127403  353293 ssh_runner.go:195] Run: containerd --version
	I1205 07:59:49.147935  353293 ssh_runner.go:195] Run: containerd --version
	I1205 07:59:49.179432  353293 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	I1205 07:59:49.182431  353293 cli_runner.go:164] Run: docker network inspect flannel-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:59:49.198146  353293 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:59:49.202143  353293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:59:49.212170  353293 kubeadm.go:884] updating cluster {Name:flannel-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:59:49.212292  353293 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 07:59:49.212352  353293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:59:49.236899  353293 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:59:49.236921  353293 containerd.go:534] Images already preloaded, skipping extraction
	I1205 07:59:49.236985  353293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:59:49.260594  353293 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:59:49.260618  353293 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:59:49.260626  353293 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1205 07:59:49.260721  353293 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-183381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1205 07:59:49.260789  353293 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:59:49.284644  353293 cni.go:84] Creating CNI manager for "flannel"
	I1205 07:59:49.284672  353293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:59:49.284695  353293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-183381 NodeName:flannel-183381 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:59:49.284807  353293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-183381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:59:49.284874  353293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:59:49.292982  353293 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:59:49.293053  353293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:59:49.300877  353293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 07:59:49.313811  353293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:59:49.326842  353293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1205 07:59:49.339766  353293 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:59:49.344799  353293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:59:49.354915  353293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:59:49.488258  353293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:59:49.505970  353293 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381 for IP: 192.168.85.2
	I1205 07:59:49.505994  353293 certs.go:195] generating shared ca certs ...
	I1205 07:59:49.506010  353293 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:49.506150  353293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:59:49.506209  353293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:59:49.506221  353293 certs.go:257] generating profile certs ...
	I1205 07:59:49.506278  353293 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.key
	I1205 07:59:49.506295  353293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.crt with IP's: []
	I1205 07:59:49.581853  353293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.crt ...
	I1205 07:59:49.581886  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.crt: {Name:mk67e8e3c8d6957be5f244be0219dd770cb7f5b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:49.582071  353293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.key ...
	I1205 07:59:49.582085  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/client.key: {Name:mk6ded83bb9d88f2cc066f30f8e6fbf1754fa5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:49.582172  353293 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key.1a81915e
	I1205 07:59:49.582187  353293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt.1a81915e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:59:50.079783  353293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt.1a81915e ...
	I1205 07:59:50.079817  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt.1a81915e: {Name:mk15566ce9ee74c631d275bd26b25cfb5bd6d814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:50.080014  353293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key.1a81915e ...
	I1205 07:59:50.080029  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key.1a81915e: {Name:mkca12a347865336d9ce924b0ff7ecc7f20c26cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:50.080114  353293 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt.1a81915e -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt
	I1205 07:59:50.080194  353293 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key.1a81915e -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key
	I1205 07:59:50.080258  353293 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.key
	I1205 07:59:50.080277  353293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.crt with IP's: []
	I1205 07:59:50.221484  353293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.crt ...
	I1205 07:59:50.221516  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.crt: {Name:mka027ca73e29d06a057f2f1cb942f3357a82d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:50.221695  353293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.key ...
	I1205 07:59:50.221707  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.key: {Name:mk88bd93605bf0f3660fb86e2060b0c7eb736ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:59:50.221907  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:59:50.221954  353293 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:59:50.221968  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:59:50.221997  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:59:50.222025  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:59:50.222053  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:59:50.222102  353293 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:59:50.222648  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:59:50.241550  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:59:50.260230  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:59:50.277938  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:59:50.295434  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:59:50.312930  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:59:50.331230  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:59:50.350599  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/flannel-183381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:59:50.369440  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:59:50.391038  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:59:50.414668  353293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:59:50.431805  353293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:59:50.444832  353293 ssh_runner.go:195] Run: openssl version
	I1205 07:59:50.451046  353293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:59:50.458363  353293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:59:50.465667  353293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:59:50.469850  353293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:59:50.469912  353293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:59:50.510673  353293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:59:50.518199  353293 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:59:50.525531  353293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:59:50.532647  353293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:59:50.539863  353293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:59:50.543427  353293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:59:50.543508  353293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:59:50.584303  353293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:59:50.591630  353293 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:59:50.598740  353293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:59:50.605916  353293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:59:50.613224  353293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:59:50.617068  353293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:59:50.617285  353293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:59:50.663562  353293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:59:50.670855  353293 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
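The openssl/ln pairs above wire each CA into the OpenSSL trust store: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 for minikubeCA in this run), and a `<hash>.0` symlink under /etc/ssl/certs is the c_rehash convention OpenSSL uses to locate CAs at verification time. The same wiring as a small Go sketch:

```go
// Sketch: compute a CA's OpenSSL subject hash and create the <hash>.0
// symlink under /etc/ssl/certs, as the ssh_runner lines above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the run above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pemPath)
}
```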
	I1205 07:59:50.678155  353293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:59:50.681887  353293 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:59:50.681984  353293 kubeadm.go:401] StartCluster: {Name:flannel-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:59:50.682073  353293 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:59:50.682136  353293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:59:50.706978  353293 cri.go:89] found id: ""
	I1205 07:59:50.707047  353293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:59:50.714872  353293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:59:50.722753  353293 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:59:50.722847  353293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:59:50.730849  353293 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:59:50.730870  353293 kubeadm.go:158] found existing configuration files:
	
	I1205 07:59:50.730934  353293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:59:50.738633  353293 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:59:50.738714  353293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:59:50.746313  353293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:59:50.754111  353293 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:59:50.754223  353293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:59:50.761732  353293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:59:50.769186  353293 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:59:50.769295  353293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:59:50.776516  353293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:59:50.784268  353293 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:59:50.784332  353293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:59:50.791634  353293 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:59:50.832009  353293 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:59:50.832227  353293 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:59:50.871645  353293 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:59:50.871800  353293 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:59:50.871857  353293 kubeadm.go:319] OS: Linux
	I1205 07:59:50.871934  353293 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:59:50.872009  353293 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:59:50.872089  353293 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:59:50.872187  353293 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:59:50.872261  353293 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:59:50.872341  353293 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:59:50.872417  353293 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:59:50.872497  353293 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:59:50.872569  353293 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:59:50.960664  353293 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:59:50.960789  353293 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:59:50.960886  353293 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:59:50.969797  353293 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:59:50.976535  353293 out.go:252]   - Generating certificates and keys ...
	I1205 07:59:50.976690  353293 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:59:50.976790  353293 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:59:52.402521  353293 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:59:52.861300  353293 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:59:53.058690  353293 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:59:54.167901  353293 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:59:54.440151  353293 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:59:54.440614  353293 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-183381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:59:54.858636  353293 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:59:54.859089  353293 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-183381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:59:55.029880  353293 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:59:55.298334  353293 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:59:55.458065  353293 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:59:55.458363  353293 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:59:55.752903  353293 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:59:55.862137  353293 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:59:56.248326  353293 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:59:56.556547  353293 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:59:57.271328  353293 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:59:57.272047  353293 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:59:57.275280  353293 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:59:57.278819  353293 out.go:252]   - Booting up control plane ...
	I1205 07:59:57.278928  353293 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:59:57.279006  353293 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:59:57.279878  353293 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:59:57.297351  353293 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:59:57.297663  353293 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:59:57.305949  353293 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:59:57.306258  353293 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:59:57.306480  353293 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:59:57.449350  353293 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:59:57.449469  353293 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:59:58.945532  353293 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501235524s
	I1205 07:59:58.947172  353293 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:59:58.947265  353293 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 07:59:58.947360  353293 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:59:58.947443  353293 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:00:03.859149  353293 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.911316305s
	I1205 08:00:04.681535  353293 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.734320152s
	I1205 08:00:06.448825  353293 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.501420339s
	I1205 08:00:06.481239  353293 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:00:06.498203  353293 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:00:06.514585  353293 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:00:06.514810  353293 kubeadm.go:319] [mark-control-plane] Marking the node flannel-183381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:00:06.527554  353293 kubeadm.go:319] [bootstrap-token] Using token: kc75mr.3st2o5c9gu3f4wg8
	I1205 08:00:06.530473  353293 out.go:252]   - Configuring RBAC rules ...
	I1205 08:00:06.530611  353293 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:00:06.535192  353293 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:00:06.543935  353293 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:00:06.550360  353293 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:00:06.556278  353293 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:00:06.560808  353293 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:00:06.858744  353293 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:00:07.279273  353293 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:00:07.856303  353293 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:00:07.857508  353293 kubeadm.go:319] 
	I1205 08:00:07.857579  353293 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:00:07.857584  353293 kubeadm.go:319] 
	I1205 08:00:07.857667  353293 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:00:07.857671  353293 kubeadm.go:319] 
	I1205 08:00:07.857696  353293 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:00:07.857755  353293 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:00:07.857805  353293 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:00:07.857809  353293 kubeadm.go:319] 
	I1205 08:00:07.857863  353293 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:00:07.857867  353293 kubeadm.go:319] 
	I1205 08:00:07.857914  353293 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:00:07.857918  353293 kubeadm.go:319] 
	I1205 08:00:07.857969  353293 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:00:07.858045  353293 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:00:07.858113  353293 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:00:07.858117  353293 kubeadm.go:319] 
	I1205 08:00:07.858202  353293 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:00:07.858278  353293 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:00:07.858282  353293 kubeadm.go:319] 
	I1205 08:00:07.858366  353293 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kc75mr.3st2o5c9gu3f4wg8 \
	I1205 08:00:07.858470  353293 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c \
	I1205 08:00:07.858490  353293 kubeadm.go:319] 	--control-plane 
	I1205 08:00:07.858494  353293 kubeadm.go:319] 
	I1205 08:00:07.858578  353293 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:00:07.858582  353293 kubeadm.go:319] 
	I1205 08:00:07.858664  353293 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kc75mr.3st2o5c9gu3f4wg8 \
	I1205 08:00:07.858766  353293 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c 
	I1205 08:00:07.862076  353293 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:00:07.862309  353293 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 08:00:07.862419  353293 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 08:00:07.862474  353293 cni.go:84] Creating CNI manager for "flannel"
	I1205 08:00:07.865400  353293 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1205 08:00:07.868229  353293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 08:00:07.872545  353293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 08:00:07.872565  353293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1205 08:00:07.891049  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
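Here minikube writes its bundled flannel manifest to /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl. Outside minikube the equivalent step is normally a single apply of the upstream manifest; a sketch (the URL is the flannel project's published manifest, an assumption not taken from this log):

	# deploy flannel on a kubeadm cluster; the pod network CIDR must match the manifest's Network value
	kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml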
	I1205 08:00:08.427406  353293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:00:08.427530  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:08.427620  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-183381 minikube.k8s.io/updated_at=2025_12_05T08_00_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=flannel-183381 minikube.k8s.io/primary=true
	I1205 08:00:08.479177  353293 ops.go:34] apiserver oom_adj: -16
	I1205 08:00:08.620448  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:09.120486  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:09.621285  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:10.121369  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:10.620484  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:11.120584  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:11.620710  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:12.121127  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:12.620684  353293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:00:12.776135  353293 kubeadm.go:1114] duration metric: took 4.348647783s to wait for elevateKubeSystemPrivileges
	I1205 08:00:12.776163  353293 kubeadm.go:403] duration metric: took 22.094183158s to StartCluster
	I1205 08:00:12.776180  353293 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:12.776242  353293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 08:00:12.777148  353293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:00:12.777432  353293 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 08:00:12.777632  353293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:00:12.777947  353293 config.go:182] Loaded profile config "flannel-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 08:00:12.778059  353293 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:00:12.778133  353293 addons.go:70] Setting storage-provisioner=true in profile "flannel-183381"
	I1205 08:00:12.778148  353293 addons.go:239] Setting addon storage-provisioner=true in "flannel-183381"
	I1205 08:00:12.778172  353293 host.go:66] Checking if "flannel-183381" exists ...
	I1205 08:00:12.778599  353293 addons.go:70] Setting default-storageclass=true in profile "flannel-183381"
	I1205 08:00:12.778622  353293 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-183381"
	I1205 08:00:12.778891  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 08:00:12.779223  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 08:00:12.780936  353293 out.go:179] * Verifying Kubernetes components...
	I1205 08:00:12.787048  353293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:00:12.815810  353293 addons.go:239] Setting addon default-storageclass=true in "flannel-183381"
	I1205 08:00:12.815850  353293 host.go:66] Checking if "flannel-183381" exists ...
	I1205 08:00:12.816287  353293 cli_runner.go:164] Run: docker container inspect flannel-183381 --format={{.State.Status}}
	I1205 08:00:12.823602  353293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:00:12.826575  353293 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:00:12.826599  353293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:00:12.826665  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 08:00:12.847813  353293 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:00:12.847833  353293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:00:12.847893  353293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-183381
	I1205 08:00:12.879702  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 08:00:12.894574  353293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/flannel-183381/id_rsa Username:docker}
	I1205 08:00:13.085610  353293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 08:00:13.116026  353293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:00:13.143170  353293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:00:13.155599  353293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:00:13.549975  353293 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
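The sed pipeline a few lines up rewrites the coredns ConfigMap in place; reconstructed from its expressions, the fragment it inserts ahead of the `forward . /etc/resolv.conf` directive looks like this, which is what makes host.minikube.internal resolve to the gateway:

	# Corefile fragment injected by minikube (values from this run)
	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}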
	I1205 08:00:13.550933  353293 node_ready.go:35] waiting up to 15m0s for node "flannel-183381" to be "Ready" ...
	I1205 08:00:13.956732  353293 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467880478Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467944766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468010408Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468070889Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468137597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468209844Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468274591Z" level=info msg="runtime interface created"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468329098Z" level=info msg="created NRI interface"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468386092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468477137Z" level=info msg="Connect containerd service"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468834006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.469689958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479649538Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479732066Z" level=info msg="Start recovering state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480037743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480474506Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497682696Z" level=info msg="Start event monitor"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497734635Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497745409Z" level=info msg="Start streaming server"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497758537Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497768318Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497774849Z" level=info msg="starting plugins..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497803961Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.498055853Z" level=info msg="containerd successfully booted in 0.055465s"
	Dec 05 07:45:10 no-preload-241270 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:00:18.082448    8066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:00:18.083279    8066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:00:18.085097    8066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:00:18.085779    8066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:00:18.087427    8066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 07:54] hrtimer: interrupt took 15630962 ns
	
	
	==> kernel <==
	 08:00:18 up  2:42,  0 user,  load average: 1.85, 1.70, 1.56
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:00:15 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:00:15 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 05 08:00:15 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:15 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:15 no-preload-241270 kubelet[7933]: E1205 08:00:15.888674    7933 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:00:15 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:00:15 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:00:16 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 05 08:00:16 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:16 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:16 no-preload-241270 kubelet[7939]: E1205 08:00:16.641917    7939 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:00:16 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:00:16 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:00:17 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 05 08:00:17 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:17 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:17 no-preload-241270 kubelet[7973]: E1205 08:00:17.382705    7973 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:00:17 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:00:17 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:00:18 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 05 08:00:18 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:18 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:00:18 no-preload-241270 kubelet[8071]: E1205 08:00:18.161672    8071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:00:18 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:00:18 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
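The kubelet crash loop in the log above (restart counter past 1200) comes from v1.35.0-beta.0's hard validation against cgroup v1, and this host's 5.15 AWS kernel is still on the legacy hierarchy. A quick, minikube-independent way to check which cgroup version a host runs:

	# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup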
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 2 (389.839584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.88s)

TestStartStop/group/newest-cni/serial/Pause (10.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-622440 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (317.48671ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-622440 -n newest-cni-622440
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (313.833601ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-622440 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (322.435471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-622440 -n newest-cni-622440
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (318.416976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-622440
helpers_test.go:243: (dbg) docker inspect newest-cni-622440:

-- stdout --
	[
	    {
	        "Id": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	        "Created": "2025-12-05T07:34:55.965403434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:25.584904359Z",
	            "FinishedAt": "2025-12-05T07:45:24.024543459Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hosts",
	        "LogPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4-json.log",
	        "Name": "/newest-cni-622440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-622440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-622440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	                "LowerDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-622440",
	                "Source": "/var/lib/docker/volumes/newest-cni-622440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-622440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-622440",
	                "name.minikube.sigs.k8s.io": "newest-cni-622440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed9530bf43b75054636d02a5c2e26f04f7734993d5bbcca1755d31d58cd478eb",
	            "SandboxKey": "/var/run/docker/netns/ed9530bf43b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-622440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:fd:48:71:b9:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96c6294e00fc4b96dda84202da479b822dd69419748060a344f1800d21559cfe",
	                    "EndpointID": "58c3f199e7d48a7db52c99942eb204475e9d0d215b5c84cb3379d82aa57f00e6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-622440",
	                        "9420074472d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
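The test helpers read single fields out of this JSON with `docker inspect` Go templates rather than parsing it by hand; the same template appears earlier in the log for the SSH port. For example, to fetch the host port bound to the container's 22/tcp:

	# prints the mapped SSH host port (33103 for this container, per the Ports section above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-622440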
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (358.308186ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25: (1.786557065s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ stop    │ -p no-preload-241270 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ stop    │ -p newest-cni-622440 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-622440 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ image   │ newest-cni-622440 image list --format=json                                                                                                                                                                                                                 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	│ pause   │ -p newest-cni-622440 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	│ unpause │ -p newest-cni-622440 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
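The table above is minikube's command audit for this profile, reproduced verbatim from the log dump. To pull just that section out of a captured log file, a sketch (the ==> Audit <== and ==> Last Start <== section markers are the ones current minikube emits, and minikube-logs.txt is a hypothetical capture file):

	# Print only the audit table from a saved "minikube logs" dump.
	sed -n '/==> Audit <==/,/==> Last Start <==/p' minikube-logs.txt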
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:25.090022  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090052  299667 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:25.090069  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090384  299667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:25.090842  299667 out.go:368] Setting JSON to false
	I1205 07:45:25.091806  299667 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8872,"bootTime":1764911853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:25.091916  299667 start.go:143] virtualization:  
	I1205 07:45:25.094988  299667 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:25.098817  299667 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:25.098909  299667 notify.go:221] Checking for updates...
	I1205 07:45:25.105041  299667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:25.108085  299667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:25.111075  299667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:25.114070  299667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:25.117093  299667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:25.120796  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:25.121387  299667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:25.146702  299667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:25.146810  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.201970  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.192879595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.202086  299667 docker.go:319] overlay module found
	I1205 07:45:25.205420  299667 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:25.208200  299667 start.go:309] selected driver: docker
	I1205 07:45:25.208216  299667 start.go:927] validating driver "docker" against &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.208322  299667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:25.209018  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.271889  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.262935561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.272253  299667 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:45:25.272290  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:25.272360  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:25.272408  299667 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.275549  299667 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:45:25.278335  299667 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:25.281398  299667 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:25.284371  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:25.284526  299667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:25.304420  299667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:25.304443  299667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:45:25.350688  299667 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:45:25.522612  299667 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
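The two 404s above are expected for a pre-release Kubernetes: no preload tarball has been published for v1.35.0-beta.0, so minikube falls back to caching each image individually (the cache.go lines that follow). Whether a preload exists for a given version can be checked directly, a sketch using the same URL the log probes:

	# 200 means a preload tarball exists; 404 forces the per-image cache path seen below.
	curl -sI -o /dev/null -w '%{http_code}\n' \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4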
	I1205 07:45:25.522872  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.522902  299667 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.522986  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:25.522997  299667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.314µs
	I1205 07:45:25.523010  299667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:25.523020  299667 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523050  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:25.523054  299667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.177µs
	I1205 07:45:25.523060  299667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523070  299667 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523108  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:25.523117  299667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.906µs
	I1205 07:45:25.523123  299667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523137  299667 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523144  299667 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:25.523164  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:25.523170  299667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.867µs
	I1205 07:45:25.523176  299667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523180  299667 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523184  299667 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523220  299667 start.go:364] duration metric: took 26.043µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:45:25.523232  299667 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:25.523223  299667 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523248  299667 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:25.523282  299667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.595µs
	I1205 07:45:25.523288  299667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:25.523289  299667 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523237  299667 fix.go:54] fixHost starting: 
	I1205 07:45:25.523319  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:25.523328  299667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 144.182µs
	I1205 07:45:25.523335  299667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523296  299667 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 85.228µs
	I1205 07:45:25.523346  299667 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:25.523368  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:25.523373  299667 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 85.498µs
	I1205 07:45:25.523378  299667 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:25.523390  299667 cache.go:87] Successfully saved all images to host disk.
	I1205 07:45:25.523585  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.542111  299667 fix.go:112] recreateIfNeeded on newest-cni-622440: state=Stopped err=<nil>
	W1205 07:45:25.542142  299667 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:45:26.103157  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:26.555898  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:26.616440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:26.616472  297527 retry.go:31] will retry after 4.350402654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.227883  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:27.290238  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.290274  297527 retry.go:31] will retry after 4.46337589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
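Both addon applies fail the same way: kubectl's client-side validation needs the apiserver's /openapi/v2 endpoint, which still refuses connections while the control plane restarts, so retry.go backs off and re-applies until it succeeds. A minimal sketch of that pattern in plain shell (a fixed attempt count and linear backoff standing in for minikube's randomized delays):

	# Re-apply until the apiserver accepts connections, then stop.
	for i in 1 2 3 4 5; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep $((i * 2))
	done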
	W1205 07:45:28.602428  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:25.545608  299667 out.go:252] * Restarting existing docker container for "newest-cni-622440" ...
	I1205 07:45:25.545717  299667 cli_runner.go:164] Run: docker start newest-cni-622440
	I1205 07:45:25.826053  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.856383  299667 kic.go:430] container "newest-cni-622440" state is running.
	I1205 07:45:25.856775  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:25.877321  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.877542  299667 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:25.878047  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:25.903226  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:25.903553  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:25.903561  299667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:25.904107  299667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35184->127.0.0.1:33103: read: connection reset by peer
	I1205 07:45:29.056730  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.056754  299667 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:45:29.056818  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.074923  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.075238  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.075256  299667 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:45:29.238817  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.238924  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.256394  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.256698  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.256720  299667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
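The snippet above is an idempotent /etc/hosts edit: it rewrites or appends the 127.0.1.1 entry only when no line already maps the new hostname, which keeps local tools such as sudo from warning about an unresolvable host after the rename. The result can be confirmed with a lookup, e.g.:

	getent hosts newest-cni-622440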
	I1205 07:45:29.409360  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:29.409384  299667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:29.409403  299667 ubuntu.go:190] setting up certificates
	I1205 07:45:29.409412  299667 provision.go:84] configureAuth start
	I1205 07:45:29.409469  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:29.426522  299667 provision.go:143] copyHostCerts
	I1205 07:45:29.426598  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:29.426610  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:29.426695  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:29.426806  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:29.426817  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:29.426846  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:29.426910  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:29.426920  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:29.426946  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:29.427008  299667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:45:29.583992  299667 provision.go:177] copyRemoteCerts
	I1205 07:45:29.584079  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:29.584142  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.601241  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.705331  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:29.723929  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:29.741035  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:45:29.758654  299667 provision.go:87] duration metric: took 349.219709ms to configureAuth
	I1205 07:45:29.758682  299667 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:29.758882  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:29.758893  299667 machine.go:97] duration metric: took 3.881342431s to provisionDockerMachine
	I1205 07:45:29.758901  299667 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:45:29.758917  299667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:29.758966  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:29.759008  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.777016  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.881927  299667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:29.889885  299667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:29.889915  299667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:29.889927  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:29.889986  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:29.890075  299667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:29.890181  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:29.899716  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:29.920554  299667 start.go:296] duration metric: took 161.628343ms for postStartSetup
	I1205 07:45:29.920647  299667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:29.920717  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.938834  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.040045  299667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:30.045649  299667 fix.go:56] duration metric: took 4.522402293s for fixHost
	I1205 07:45:30.045683  299667 start.go:83] releasing machines lock for "newest-cni-622440", held for 4.522453444s
	I1205 07:45:30.045767  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:30.065623  299667 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:30.065678  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.065694  299667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:30.065761  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.087940  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.099183  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.281502  299667 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:30.288110  299667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:30.292481  299667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:30.292550  299667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:30.300562  299667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:30.300584  299667 start.go:496] detecting cgroup driver to use...
	I1205 07:45:30.300616  299667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:30.300666  299667 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:30.318364  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:30.332088  299667 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:30.332151  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:30.348258  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:30.361775  299667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:30.469361  299667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:30.577441  299667 docker.go:234] disabling docker service ...
	I1205 07:45:30.577508  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:30.592915  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:30.607578  299667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:30.752107  299667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:30.872747  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:30.888408  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:30.904134  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:30.914385  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:30.923315  299667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:30.923423  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:30.932175  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.940943  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:30.949729  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.958228  299667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:30.965941  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:30.980042  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:30.995740  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:31.009747  299667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:31.019595  299667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:31.028525  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.153254  299667 ssh_runner.go:195] Run: sudo systemctl restart containerd
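The sed edits above pin the sandbox image to registry.k8s.io/pause:3.10.1 and force SystemdCgroup = false, matching the cgroupfs driver detected on the host. After the restart, the rewritten values can be spot-checked inside the node, a sketch using the same paths the log shows:

	grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version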
	I1205 07:45:31.252043  299667 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:31.252123  299667 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:31.255724  299667 start.go:564] Will wait 60s for crictl version
	I1205 07:45:31.255784  299667 ssh_runner.go:195] Run: which crictl
	I1205 07:45:31.259402  299667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:31.288033  299667 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:31.288102  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.310723  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.334839  299667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:31.337671  299667 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:31.359874  299667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:31.365663  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.387524  299667 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:45:31.390412  299667 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:31.390547  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:31.390648  299667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:31.429142  299667 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:31.429206  299667 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:31.429215  299667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:31.429338  299667 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
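The drop-in above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= line is the standard systemd idiom for clearing the packaged command before overriding it. systemd's merged view of the unit plus drop-in can be inspected inside the node with:

	systemctl cat kubelet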
	I1205 07:45:31.429419  299667 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:31.463460  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:31.463487  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:31.463511  299667 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:45:31.463580  299667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:31.463714  299667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:45:31.463789  299667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:31.471606  299667 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:31.471702  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:31.480080  299667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:31.492950  299667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:31.505530  299667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
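	For reference, the kubeadm.go:196 block above is the rendered form of the options struct logged at kubeadm.go:190: minikube templates the struct into YAML and then copies the result to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of that render step (not minikube's actual template or types; the struct fields and template text are trimmed down for illustration):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmOpts carries just the fields this sketch needs; the real
    // options struct logged above has many more.
    type kubeadmOpts struct {
        NodeName         string
        AdvertiseAddress string
        APIServerPort    int
    }

    // initCfg is a cut-down fragment of the InitConfiguration emitted above.
    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        opts := kubeadmOpts{
            NodeName:         "newest-cni-622440",
            AdvertiseAddress: "192.168.85.2",
            APIServerPort:    8443,
        }
        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }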
	I1205 07:45:31.518323  299667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:31.521961  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
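	The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: drop any existing line for the host, append the fresh IP, and copy the result back over /etc/hosts. A rough Go equivalent (simplified: no sudo and no temp-file-then-cp step, and it targets a scratch file rather than the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing "<ip>\t<host>" line and appends
    // the desired mapping, mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale mapping; re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Scratch file so the sketch does not touch the real /etc/hosts.
        _ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := ensureHostsEntry("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
        out, _ := os.ReadFile("hosts.test")
        fmt.Print(string(out))
    }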
	I1205 07:45:31.531618  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.655593  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:31.673339  299667 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:45:31.673398  299667 certs.go:195] generating shared ca certs ...
	I1205 07:45:31.673427  299667 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:31.673592  299667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:31.673665  299667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:31.673695  299667 certs.go:257] generating profile certs ...
	I1205 07:45:31.673812  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:45:31.673907  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:45:31.673970  299667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:45:31.674103  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:31.674164  299667 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:31.674197  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:31.674246  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:31.674289  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:31.674341  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:31.674413  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:31.675038  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:31.699874  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:31.718981  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:31.739011  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:31.757897  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:31.776123  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:31.794286  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:31.815714  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:31.832875  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:31.851417  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:31.868401  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:31.885858  299667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:31.898468  299667 ssh_runner.go:195] Run: openssl version
	I1205 07:45:31.904594  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.911851  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:31.919124  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922684  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922758  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.963682  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:31.970739  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.977808  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:31.985046  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988699  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988790  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:32.029966  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:32.037736  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.045196  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:32.052663  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056573  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056689  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.097976  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
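	Each of the three certificate blocks above repeats one pattern: symlink the PEM into place, compute its OpenSSL subject hash, and check that /etc/ssl/certs/<hash>.0 points at it — the hashed-name scheme OpenSSL uses to look up CAs in a directory (b5213941.0 is the subject hash of minikubeCA.pem, for example). A sketch of the hash-and-link step in Go, assuming openssl is on PATH; the scratch directory is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink asks openssl for the certificate's subject hash and points
    // <certDir>/<hash>.0 at the PEM, which is how OpenSSL expects CA
    // directories to be laid out.
    func hashLink(certPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        if err := os.MkdirAll(certDir, 0755); err != nil {
            return err
        }
        link := fmt.Sprintf("%s/%s.0", certDir, hash)
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/ca-links"); err != nil {
            fmt.Println("hashLink:", err)
        }
    }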
	I1205 07:45:32.106452  299667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:32.110712  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:32.154012  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:32.194946  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:32.235499  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:32.276192  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:32.316778  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
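	The six openssl runs above all pass -checkend 86400: fail if the certificate expires within the next 24 hours (86,400 seconds). The same check expressed with Go's crypto/x509 (the path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window — the condition "-checkend 86400" tests.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }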
	I1205 07:45:32.357969  299667 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:32.358063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:32.358128  299667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:32.393923  299667 cri.go:89] found id: ""
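	cri.go:89 reporting found id: "" means the crictl query above matched no kube-system containers, i.e. nothing from a previous control plane is still running under containerd. A sketch of that query, assuming crictl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listPodContainers asks the CRI runtime for all container IDs (running
    // or not, hence -a) carrying the given pod-namespace label, mirroring
    // the crictl invocation above.
    func listPodContainers(namespace string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace="+namespace).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; empty slice if none
    }

    func main() {
        ids, err := listPodContainers("kube-system")
        if err != nil {
            fmt.Println("crictl not available here:", err)
            return
        }
        fmt.Printf("found %d container(s): %v\n", len(ids), ids)
    }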
	I1205 07:45:32.393993  299667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:32.401825  299667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:32.401893  299667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:32.401977  299667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:32.409190  299667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:32.409869  299667 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.410186  299667 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-622440" cluster setting kubeconfig missing "newest-cni-622440" context setting]
	I1205 07:45:32.410754  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.412652  299667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:32.420082  299667 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:32.420112  299667 kubeadm.go:602] duration metric: took 18.200733ms to restartPrimaryControlPlane
	I1205 07:45:32.420122  299667 kubeadm.go:403] duration metric: took 62.162615ms to StartCluster
	I1205 07:45:32.420136  299667 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.420193  299667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.421089  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.421340  299667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:32.421617  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:32.421690  299667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:32.421796  299667 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-622440"
	I1205 07:45:32.421816  299667 addons.go:70] Setting default-storageclass=true in profile "newest-cni-622440"
	I1205 07:45:32.421860  299667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-622440"
	I1205 07:45:32.421826  299667 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-622440"
	I1205 07:45:32.421949  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.422169  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.422375  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.421807  299667 addons.go:70] Setting dashboard=true in profile "newest-cni-622440"
	I1205 07:45:32.422859  299667 addons.go:239] Setting addon dashboard=true in "newest-cni-622440"
	W1205 07:45:32.422869  299667 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:32.422895  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.423306  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.425911  299667 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:32.429270  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:32.459552  299667 addons.go:239] Setting addon default-storageclass=true in "newest-cni-622440"
	I1205 07:45:32.459590  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.459994  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.466676  299667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:32.469573  299667 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:32.469693  299667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.469710  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:32.469779  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.479022  299667 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:45:30.602600  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:30.967025  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:31.052948  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.052985  297527 retry.go:31] will retry after 7.944795354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.285879  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:31.386500  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.386531  297527 retry.go:31] will retry after 6.357223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.754709  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:31.845913  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.845950  297527 retry.go:31] will retry after 12.860014736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
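	The retry.go:31 delays above (7.94s, 6.36s, 12.86s, and the sub-second waits later in the log) are randomized backoff intervals: each failing kubectl apply is retried after a jittered, generally growing pause until the apiserver starts answering. A sketch of that shape of retry loop (the base delay, growth factor, and attempt cap are illustrative, not minikube's tuning):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping an
    // exponentially growing, jittered delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << uint(i)                      // double the delay each round
            d += time.Duration(rand.Int63n(int64(d))) // add up to 100% jitter
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        })
        fmt.Println("final result:", err)
    }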
	W1205 07:45:33.103254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:32.484603  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:32.484629  299667 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:32.484694  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.517396  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.529599  299667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.529620  299667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:32.529685  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.549325  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.574838  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.643911  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:32.670090  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.687313  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:32.687343  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:32.721498  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:32.721518  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:32.728026  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.759870  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:32.759892  299667 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:32.773100  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:32.773119  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:32.790813  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:32.790887  299667 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:32.806943  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:32.807008  299667 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:32.827525  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:32.827547  299667 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:32.840144  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:32.840166  299667 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:32.856122  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:32.856196  299667 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:32.869771  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:33.097468  299667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:33.097593  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
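	api_server.go:52 waits for the kube-apiserver process by polling pgrep -xnf (the same command is run again at 07:45:33.598 below). A sketch of that wait loop, assuming pgrep is available; the pattern and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matching pattern exists or
    // the deadline passes; pgrep exits 0 as soon as there is a match.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }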
	W1205 07:45:33.097728  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097794  299667 retry.go:31] will retry after 241.658936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.097872  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097907  299667 retry.go:31] will retry after 176.603947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.098118  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.098157  299667 retry.go:31] will retry after 229.408257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.275635  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:33.328106  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.333654  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.333699  299667 retry.go:31] will retry after 493.072495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.339842  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:33.420976  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421140  299667 retry.go:31] will retry after 232.443098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.421103  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421275  299667 retry.go:31] will retry after 218.243264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.598377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:33.640183  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:33.654611  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.714507  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.714586  299667 retry.go:31] will retry after 296.021108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.735889  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.735929  299667 retry.go:31] will retry after 647.569018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.827334  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:33.912321  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.912410  299667 retry.go:31] will retry after 511.925432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.011792  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:34.070223  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.070270  299667 retry.go:31] will retry after 1.045041767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.098366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:34.384609  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:34.425097  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:34.456662  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.456771  299667 retry.go:31] will retry after 1.012360732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:34.490780  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.490815  299667 retry.go:31] will retry after 673.94662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.598028  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
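The recurring `sudo pgrep -xnf kube-apiserver.*minikube.*` lines, spaced roughly 500ms apart, are minikube polling for the apiserver process to (re)appear while the applies fail. A sketch of that polling loop with os/exec; the deadline is an assumption, the pgrep flags are the ones from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			// -x: exact match, -n: newest process, -f: match the full command line
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms spacing
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
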
	W1205 07:45:35.602346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:37.602757  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
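These two warnings come from the parallel no-preload test (process 297527), whose node_ready.go is polling the node's Ready condition through the same unreachable apiserver. A minimal client-go sketch of such a check, assuming a standard kubeconfig; this is an illustration of the check, not minikube's actual node_ready.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-241270", metav1.GetOptions{})
		if err != nil {
			// while the apiserver is down this returns "connect: connection refused",
			// which is exactly what the warnings above report
			fmt.Println("get node:", err)
			return
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready:", c.Status)
			}
		}
	}
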
	I1205 07:45:37.744028  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.809224  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.809268  297527 retry.go:31] will retry after 8.525278844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.998921  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:39.069453  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.069501  297527 retry.go:31] will retry after 21.498999078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
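Across this block the retry delays for a given manifest grow from a few hundred milliseconds into the tens of seconds (218ms, 296ms, 1.04s, 3.5s, and 21.49s just above): exponential backoff with jitter. In Kubernetes tooling such a schedule is usually expressed with k8s.io/apimachinery's wait package; the following is a sketch under that assumption only, since minikube's retry.go may be implemented differently:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// applyAddon is a hypothetical stand-in for the kubectl apply being retried.
	func applyAddon() bool { return false }

	func main() {
		backoff := wait.Backoff{
			Duration: 200 * time.Millisecond, // initial delay, close to the logged 218ms
			Factor:   2.0,                    // multiply the delay each step
			Jitter:   0.5,                    // randomize, giving the uneven gaps seen above
			Steps:    8,
		}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			return applyAddon(), nil // false = not done yet, retry after the next delay
		})
		fmt.Println("result:", err) // wait.ErrWaitTimeout if every attempt failed
	}
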
	I1205 07:45:35.097803  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.115652  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:35.165241  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:35.189445  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.189528  299667 retry.go:31] will retry after 873.335351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:35.234071  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.234107  299667 retry.go:31] will retry after 1.250813401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.469343  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:35.535355  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.535386  299667 retry.go:31] will retry after 1.457971594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.598793  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.063166  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:36.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:36.141912  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.141992  299667 retry.go:31] will retry after 1.289648417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.485696  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:36.544841  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.544879  299667 retry.go:31] will retry after 2.662984572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.598226  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.993607  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.063691  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.063774  299667 retry.go:31] will retry after 1.151172803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.098032  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.431865  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:37.492142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.492177  299667 retry.go:31] will retry after 3.504601193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.598357  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.098363  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.215346  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:38.274274  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.274309  299667 retry.go:31] will retry after 1.757329115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.597749  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.097719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.208847  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:39.266142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.266182  299667 retry.go:31] will retry after 3.436463849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.598395  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.031973  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:40.102833  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:42.602360  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:44.706625  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
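Note the timestamps jump backwards at this point: the report merges the output of two minikube processes running in parallel (PID 297527, the no-preload-241270 cluster, and PID 299667), so their records interleave out of order. To read one run at a time you can filter on the PID column, e.g. (log filename hypothetical):

	grep ' 297527 ' test-report.log   # only the no-preload-241270 run
	grep ' 299667 ' test-report.log   # only the other parallel run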
	W1205 07:45:40.092374  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.092409  299667 retry.go:31] will retry after 2.182976597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.098469  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.598422  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.997583  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:41.059423  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.059455  299667 retry.go:31] will retry after 3.560419221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.098613  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.098453  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.276211  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:42.351488  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.351524  299667 retry.go:31] will retry after 9.602308898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.598167  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.703420  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:42.760290  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.760322  299667 retry.go:31] will retry after 5.381602643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:43.097810  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:43.597706  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.098335  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.597780  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
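Between apply attempts the runner polls roughly twice a second for the apiserver process. The pgrep flags in that command do the heavy lifting: -f matches against the full command line rather than just the process name, -x requires the pattern to match that whole line, and -n keeps only the newest matching PID. Run interactively, the pattern is best quoted so the shell does not glob it:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'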
	I1205 07:45:44.620405  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:44.677458  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.677489  299667 retry.go:31] will retry after 4.279612118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:44.764830  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.764865  297527 retry.go:31] will retry after 17.369945393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:45.102956  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:46.334817  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:46.418483  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:46.418521  297527 retry.go:31] will retry after 23.303020683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:47.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:49.602799  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
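The node_ready poll on the 297527 side is the API-level counterpart of the same wait: it asks the apiserver at 192.168.76.2:8443 for the node's Ready condition and gets connection refused. Roughly the same check with kubectl, assuming a kubeconfig that can reach that endpoint (auth flags omitted for brevity):

	kubectl --server https://192.168.76.2:8443 get node no-preload-241270 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'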
	I1205 07:45:45.098273  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:45.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.597868  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.097740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.597768  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.097748  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.142199  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:48.202751  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.202784  299667 retry.go:31] will retry after 9.130347643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.958075  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:49.020580  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.020664  299667 retry.go:31] will retry after 5.816091686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:49.597778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:52.102357  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:54.603289  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:50.097903  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.598277  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.098323  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.598320  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.954438  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:52.018482  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.018522  299667 retry.go:31] will retry after 11.887626777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.098608  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:52.598374  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.098377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.098330  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.597906  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.837992  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:54.928421  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:54.928451  299667 retry.go:31] will retry after 21.232814528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:57.103152  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:59.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:55.097998  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:55.598566  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.098233  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.598487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.333368  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:57.391373  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.391409  299667 retry.go:31] will retry after 6.534046571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.598447  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.098487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.597673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.098584  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.597752  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.568740  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:00.647111  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:00.647143  297527 retry.go:31] will retry after 19.124891194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:01.602386  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:02.135738  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:02.196508  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:02.196541  297527 retry.go:31] will retry after 23.234297555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
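Meanwhile process 297527 (the no-preload profile) is failing a different probe: it fetches the node object for "no-preload-241270" from https://192.168.76.2:8443 to read its "Ready" condition, and the TCP connect is refused for the same reason, so node_ready.go keeps retrying. A minimal client-go sketch of such a probe is below; it is illustrative only and not minikube's actual node_ready.go code.

// readysketch.go: illustrative node "Ready" check via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this surfaces as
		// "dial tcp 192.168.76.2:8443: connect: connection refused".
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(kubernetes.NewForConfigOrDie(config), "no-preload-241270")
	fmt.Println(ready, err)
}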
	I1205 07:46:00.111473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.597738  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.097860  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.597786  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.598349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.097778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.906517  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:03.926085  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:03.977088  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:03.977126  299667 retry.go:31] will retry after 8.615984736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.014857  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.014953  299667 retry.go:31] will retry after 11.096851447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.098074  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.598727  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:06.103226  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:08.602282  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:09.722604  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:05.098302  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.598378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.098313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.098365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.597739  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.597740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.098581  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.598396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:09.788810  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:09.788894  297527 retry.go:31] will retry after 37.030083188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:10.602342  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:13.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:10.098145  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.097819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.598431  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.098421  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.593706  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:12.598498  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:12.687257  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:12.687290  299667 retry.go:31] will retry after 19.919210015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:13.098633  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:13.598345  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.097716  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.598398  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:15.602302  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:17.603239  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:15.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.112618  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:15.170666  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.170700  299667 retry.go:31] will retry after 26.586504873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.598228  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.161584  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:16.224162  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.224193  299667 retry.go:31] will retry after 29.423350117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.597722  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.097721  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.597743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.098656  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.598271  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.098404  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.598719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.772903  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:19.832639  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:19.832668  297527 retry.go:31] will retry after 32.800355392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:20.103191  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:22.602639  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:24.603138  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:20.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.597725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.097770  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.598319  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.097718  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.098368  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.598400  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.431569  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:25.488990  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:25.489023  297527 retry.go:31] will retry after 28.819883279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:27.102333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:29.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:25.098708  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.597766  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.098393  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.598238  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.098573  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.598365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.598524  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.097726  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.598366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:31.103394  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:33.602924  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:30.098021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.598337  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.098378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.097725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.597622  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:32.597702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:32.607176  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:32.654366  299667 cri.go:89] found id: ""
	I1205 07:46:32.654387  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.654395  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:32.654402  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:32.654460  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:46:32.707430  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707464  299667 retry.go:31] will retry after 35.686554771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707503  299667 cri.go:89] found id: ""
	I1205 07:46:32.707512  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.707519  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:32.707525  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:32.707583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:32.732319  299667 cri.go:89] found id: ""
	I1205 07:46:32.732341  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.732350  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:32.732356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:32.732414  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:32.756204  299667 cri.go:89] found id: ""
	I1205 07:46:32.756226  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.756235  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:32.756241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:32.756313  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:32.785401  299667 cri.go:89] found id: ""
	I1205 07:46:32.785423  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.785431  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:32.785437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:32.785493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:32.811348  299667 cri.go:89] found id: ""
	I1205 07:46:32.811373  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.811381  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:32.811388  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:32.811461  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:32.835578  299667 cri.go:89] found id: ""
	I1205 07:46:32.835603  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.835612  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:32.835618  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:32.835679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:32.861749  299667 cri.go:89] found id: ""
	I1205 07:46:32.861773  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.861781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
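After the apiserver process never appears, the start logic pivots to diagnostics: for each control-plane component it asks crictl for containers in any state whose name matches, where --quiet prints only container IDs, so an empty result (found id: "") means the container was never even created. A sketch of that sweep, assuming the same crictl invocation as the log; the orchestration around it is illustrative.

// crisketch.go: illustrative version of the per-component
// "sudo crictl ps -a --quiet --name=X" sweep above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns IDs of all containers (any state) whose name matches
// the filter; crictl's quiet output is one container ID per line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainers(name)
		fmt.Printf("%s: %d containers %v (err: %v)\n", name, len(ids), ids, err)
	}
}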
	I1205 07:46:32.861790  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:32.861801  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:32.937533  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:32.937555  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:32.937568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:32.962127  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:32.962161  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:32.989223  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:32.989256  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:33.046092  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:33.046128  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
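
The cycle above is minikube's diagnostic sweep: it probes each expected control-plane component with "sudo crictl ps -a --quiet --name=<component>", warns when nothing matches, and then gathers the kubelet and containerd journals, dmesg, and "kubectl describe nodes" output instead. A minimal Go sketch of that probe loop follows; it is illustrative only, not minikube's actual cri.go or logs.go code, and the helper name listContainers is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the logged "sudo crictl ps -a --quiet --name=<name>"
    // calls: it returns the IDs of all containers, running or exited, whose name
    // matches. (Hypothetical helper; not minikube's cri.go.)
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list, in the same order, that the log walks through.
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("found %d container(s) matching %q\n", len(ids), c)
        }
    }

In this run every probe comes back empty, which is why each cycle falls through to the journal, dmesg, and describe-nodes fallbacks.
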
	W1205 07:46:36.102426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:38.602828  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:35.559882  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:35.570602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:35.570679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:35.597322  299667 cri.go:89] found id: ""
	I1205 07:46:35.597348  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.597358  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:35.597364  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:35.597420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:35.631556  299667 cri.go:89] found id: ""
	I1205 07:46:35.631585  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.631594  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:35.631605  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:35.631670  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:35.666766  299667 cri.go:89] found id: ""
	I1205 07:46:35.666790  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.666808  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:35.666851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:35.666928  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:35.696469  299667 cri.go:89] found id: ""
	I1205 07:46:35.696494  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.696503  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:35.696510  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:35.696570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:35.721564  299667 cri.go:89] found id: ""
	I1205 07:46:35.721587  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.721613  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:35.721620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:35.721679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:35.750450  299667 cri.go:89] found id: ""
	I1205 07:46:35.750474  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.750483  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:35.750490  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:35.750577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:35.779075  299667 cri.go:89] found id: ""
	I1205 07:46:35.779097  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.779105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:35.779111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:35.779171  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:35.804778  299667 cri.go:89] found id: ""
	I1205 07:46:35.804849  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.804870  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:35.804891  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.804928  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.818664  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.818691  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.896985  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:35.897010  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:35.897023  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:35.922964  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.922997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:35.950985  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:35.951012  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.510773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:38.521214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:38.521283  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:38.547037  299667 cri.go:89] found id: ""
	I1205 07:46:38.547061  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.547069  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:38.547088  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:38.547152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:38.571870  299667 cri.go:89] found id: ""
	I1205 07:46:38.571894  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.571903  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:38.571909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:38.571967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:38.597667  299667 cri.go:89] found id: ""
	I1205 07:46:38.597693  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.597701  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.597707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:38.597781  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:38.634302  299667 cri.go:89] found id: ""
	I1205 07:46:38.634328  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.634336  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:38.634343  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:38.634411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:38.662787  299667 cri.go:89] found id: ""
	I1205 07:46:38.662813  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.662822  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.662829  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:38.662886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:38.688000  299667 cri.go:89] found id: ""
	I1205 07:46:38.688026  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.688034  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:38.688040  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:38.688108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:38.712589  299667 cri.go:89] found id: ""
	I1205 07:46:38.712611  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.712619  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.712631  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:38.712688  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:38.736469  299667 cri.go:89] found id: ""
	I1205 07:46:38.736490  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.736499  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:38.736507  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.736521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.763556  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.763586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.818344  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.818379  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.832020  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.832054  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.931143  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:38.931164  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:38.931178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:40.603153  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:43.102740  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:41.457376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.468655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:41.468729  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:41.496317  299667 cri.go:89] found id: ""
	I1205 07:46:41.496391  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.496415  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:41.496434  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:41.496520  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:41.522205  299667 cri.go:89] found id: ""
	I1205 07:46:41.522230  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.522238  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:41.522244  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:41.522304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:41.547643  299667 cri.go:89] found id: ""
	I1205 07:46:41.547668  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.547677  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.547684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:41.547743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:41.576000  299667 cri.go:89] found id: ""
	I1205 07:46:41.576024  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.576032  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:41.576039  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:41.576093  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:41.610347  299667 cri.go:89] found id: ""
	I1205 07:46:41.610373  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.610393  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.610399  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:41.610455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:41.641947  299667 cri.go:89] found id: ""
	I1205 07:46:41.641974  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.641983  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:41.641990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:41.642049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:41.680331  299667 cri.go:89] found id: ""
	I1205 07:46:41.680355  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.680363  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.680370  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:41.680426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:41.707279  299667 cri.go:89] found id: ""
	I1205 07:46:41.707301  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.707310  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:41.707319  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.707331  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.720629  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.720654  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 07:46:41.757919  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:41.789558  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:41.789582  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:41.789596  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:41.829441  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.829475  299667 retry.go:31] will retry after 23.380573162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
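
The addons.go:477 / retry.go:31 pair above shows the apply-with-retry pattern: the manifest cannot even be validated while nothing answers on :8443, so the same kubectl apply is re-queued after a randomized backoff (23.38s here). A hedged Go sketch of such a loop, assuming a five-minute budget and backoff bounds that are not minikube's actual values:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyManifest mirrors the logged command:
    //   sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f <path>
    // (sudo treats the leading VAR=value argument as an environment assignment.)
    func applyManifest(path string) error {
        return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", path).Run()
    }

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // assumed overall budget
        for {
            err := applyManifest("/etc/kubernetes/addons/storageclass.yaml")
            if err == nil {
                fmt.Println("storageclass applied")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("giving up:", err)
                return
            }
            // Randomized backoff, like the logged "will retry after 23.380573162s".
            wait := time.Duration(10+rand.Intn(25)) * time.Second
            fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
        }
    }

Note that the --validate=false workaround suggested in the stderr would not rescue these applies: validation merely fails first because it needs the OpenAPI document from the same dead endpoint, and with validation disabled the subsequent request would hit the identical connection refused.
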
	I1205 07:46:41.840285  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.840316  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.875962  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.875990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.439978  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.450947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:44.451025  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:44.476311  299667 cri.go:89] found id: ""
	I1205 07:46:44.476335  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.476344  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:44.476350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:44.476420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:44.501030  299667 cri.go:89] found id: ""
	I1205 07:46:44.501064  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.501073  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:44.501078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:44.501138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:44.525674  299667 cri.go:89] found id: ""
	I1205 07:46:44.525697  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.525705  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.525711  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:44.525769  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:44.554878  299667 cri.go:89] found id: ""
	I1205 07:46:44.554903  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.554911  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:44.554918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:44.554991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:44.579773  299667 cri.go:89] found id: ""
	I1205 07:46:44.579796  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.579805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.579811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:44.579867  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:44.611991  299667 cri.go:89] found id: ""
	I1205 07:46:44.612017  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.612042  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:44.612049  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:44.612108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:44.646395  299667 cri.go:89] found id: ""
	I1205 07:46:44.646418  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.646427  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.646433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:44.646499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:44.674148  299667 cri.go:89] found id: ""
	I1205 07:46:44.674170  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.674178  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:44.674187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.674199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.734427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.734469  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.748531  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:44.748561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:44.815565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:44.815586  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:44.815601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:44.841456  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:44.841492  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:45.103537  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:46.819177  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:46.909187  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:46.909286  297527 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:47.602297  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:49.602426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
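
The node_ready.go:55 warnings threaded through this stretch belong to the parallel no-preload test (process 297527), which polls the node object at https://192.168.76.2:8443 and retries on connection refused; the timestamps (07:46:36.1, 38.6, 40.6, 43.1, ...) show a roughly 2.5-second cadence. A rough Go sketch of that poll; authentication is omitted (the real test uses the cluster's client certificates), and the interval is simply read off the timestamps above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify stands in for the cluster CA purely to keep the
        // sketch short; auth headers/certs are omitted entirely.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
        for {
            resp, err := client.Get(url)
            if err != nil {
                // Matches the logged "connect: connection refused" warnings.
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver reachable:", resp.Status)
            return
        }
    }
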
	I1205 07:46:45.648666  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:45.706769  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:45.706803  299667 retry.go:31] will retry after 32.901994647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:47.381509  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.392949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:47.393065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:47.424033  299667 cri.go:89] found id: ""
	I1205 07:46:47.424057  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.424066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:47.424072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:47.424140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:47.451239  299667 cri.go:89] found id: ""
	I1205 07:46:47.451265  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.451275  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:47.451282  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:47.451342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:47.475229  299667 cri.go:89] found id: ""
	I1205 07:46:47.475250  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.475259  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.475265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:47.475322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:47.500010  299667 cri.go:89] found id: ""
	I1205 07:46:47.500036  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.500045  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:47.500051  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:47.500110  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:47.525665  299667 cri.go:89] found id: ""
	I1205 07:46:47.525691  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.525700  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:47.525707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:47.525767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:47.550876  299667 cri.go:89] found id: ""
	I1205 07:46:47.550902  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.550911  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:47.550917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:47.550978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:47.574838  299667 cri.go:89] found id: ""
	I1205 07:46:47.574904  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.574926  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:47.574940  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:47.575018  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:47.606672  299667 cri.go:89] found id: ""
	I1205 07:46:47.606698  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.606707  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:47.606716  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:47.606728  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.644360  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:47.644388  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:47.706982  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:47.707019  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:47.720731  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:47.720759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:47.782357  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:47.782378  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:47.782393  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:51.603232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:52.633653  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:52.692000  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:52.692106  297527 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:54.102683  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:54.310076  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:54.372261  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:54.372370  297527 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:46:54.375412  297527 out.go:179] * Enabled addons: 
	I1205 07:46:54.378282  297527 addons.go:530] duration metric: took 1m42.739564939s for enable addons: enabled=[]
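
With every callback failing against the unreachable apiserver, the enable-addons phase finally gives up, reporting an empty addon list and its elapsed time (1m42.7s). A trivial sketch of that summary step, with assumed variable names:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{} // every callback failed while :8443 refused connections
        // ... addon callbacks would append their names here on success ...
        fmt.Println("* Enabled addons:", enabled)
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }
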
	I1205 07:46:50.307630  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:50.318086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:50.318159  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:50.342816  299667 cri.go:89] found id: ""
	I1205 07:46:50.342838  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.342847  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:50.342853  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:50.342921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:50.371375  299667 cri.go:89] found id: ""
	I1205 07:46:50.371440  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.371462  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:50.371478  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:50.371566  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:50.401098  299667 cri.go:89] found id: ""
	I1205 07:46:50.401206  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.401224  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:50.401245  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:50.401310  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:50.432101  299667 cri.go:89] found id: ""
	I1205 07:46:50.432134  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.432143  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:50.432149  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:50.432262  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:50.457371  299667 cri.go:89] found id: ""
	I1205 07:46:50.457396  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.457405  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:50.457413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:50.457469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:50.486796  299667 cri.go:89] found id: ""
	I1205 07:46:50.486821  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.486830  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:50.486836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:50.486945  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:50.515505  299667 cri.go:89] found id: ""
	I1205 07:46:50.515529  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.515537  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:50.515544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:50.515606  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:50.543462  299667 cri.go:89] found id: ""
	I1205 07:46:50.543486  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.543495  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:50.543503  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:50.543561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:50.600091  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:50.600276  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:50.619872  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:50.619944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:50.690141  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:50.690160  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:50.690173  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.715362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:50.715398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
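	Note: each diagnostic pass above queries one control-plane component at a time with "sudo crictl ps -a --quiet --name=<component>"; --quiet prints only container IDs, so empty stdout is exactly what logs.go reports as "0 containers" and cri.go as found id: "". A hedged sketch of that interpretation (listContainers is a hypothetical helper, not the cri.go implementation):

	// Hedged sketch: run crictl and treat empty output as "no container".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// --quiet emits one container ID per line; no lines means no container.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // empty slice mirrors the log's `found id: ""`
	}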
	I1205 07:46:53.244467  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:53.256174  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:53.256240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:53.279782  299667 cri.go:89] found id: ""
	I1205 07:46:53.279803  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.279810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:53.279817  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:53.279878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:53.303793  299667 cri.go:89] found id: ""
	I1205 07:46:53.303813  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.303821  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:53.303827  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:53.303884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:53.332886  299667 cri.go:89] found id: ""
	I1205 07:46:53.332908  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.332916  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:53.332922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:53.332981  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:53.359130  299667 cri.go:89] found id: ""
	I1205 07:46:53.359153  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.359161  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:53.359168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:53.359229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:53.384922  299667 cri.go:89] found id: ""
	I1205 07:46:53.384947  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.384966  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:53.384972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:53.385033  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:53.409882  299667 cri.go:89] found id: ""
	I1205 07:46:53.409903  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.409912  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:53.409918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:53.409982  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:53.435229  299667 cri.go:89] found id: ""
	I1205 07:46:53.435254  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.435263  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:53.435269  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:53.435326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:53.460378  299667 cri.go:89] found id: ""
	I1205 07:46:53.460402  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.460411  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:53.460419  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:53.460430  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:53.515653  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:53.515686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:53.529252  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:53.529277  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:53.590407  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:53.590427  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:53.590439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:53.615638  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:53.615670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
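	Note: every retry cycle gathers the same five sources in rotation: kubelet and containerd via journalctl (last 400 lines), dmesg filtered to warnings and above, kubectl describe nodes against the in-VM kubeconfig, and container status with a crictl-or-docker fallback. A hedged sketch of that rotation as a fixed command list (the real ssh_runner executes these over SSH, and the kubectl path is abbreviated here):

	// Hedged sketch of the diagnostic rotation seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		gathers := [][2]string{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"containerd", "sudo journalctl -u containerd -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, g := range gathers {
			out, err := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", g[0], err, out)
		}
	}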
	W1205 07:46:56.102997  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:58.602448  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:56.149491  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:56.160491  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:56.160560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:56.186032  299667 cri.go:89] found id: ""
	I1205 07:46:56.186055  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.186063  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:56.186069  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:56.186127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:56.210655  299667 cri.go:89] found id: ""
	I1205 07:46:56.210683  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.210691  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:56.210698  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:56.210760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:56.236968  299667 cri.go:89] found id: ""
	I1205 07:46:56.237039  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.237060  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:56.237078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:56.237197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:56.261470  299667 cri.go:89] found id: ""
	I1205 07:46:56.261543  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.261559  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:56.261567  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:56.261626  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:56.287544  299667 cri.go:89] found id: ""
	I1205 07:46:56.287569  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.287578  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:56.287586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:56.287664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:56.313083  299667 cri.go:89] found id: ""
	I1205 07:46:56.313154  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.313200  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:56.313222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:56.313290  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:56.338841  299667 cri.go:89] found id: ""
	I1205 07:46:56.338865  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.338879  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:56.338886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:56.338971  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:56.364821  299667 cri.go:89] found id: ""
	I1205 07:46:56.364883  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.364906  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:56.364927  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:56.364953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:56.421380  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:56.421412  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:56.434797  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:56.434825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:56.500557  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:56.500579  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:56.500592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:56.525423  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:56.525453  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.059925  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:59.070350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:59.070417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:59.106211  299667 cri.go:89] found id: ""
	I1205 07:46:59.106234  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.106242  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:59.106250  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:59.106308  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:59.134075  299667 cri.go:89] found id: ""
	I1205 07:46:59.134101  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.134110  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:59.134116  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:59.134173  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:59.163091  299667 cri.go:89] found id: ""
	I1205 07:46:59.163119  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.163128  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:59.163134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:59.163195  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:59.189283  299667 cri.go:89] found id: ""
	I1205 07:46:59.189308  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.189316  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:59.189323  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:59.189384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:59.214391  299667 cri.go:89] found id: ""
	I1205 07:46:59.214416  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.214433  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:59.214439  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:59.214498  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:59.246223  299667 cri.go:89] found id: ""
	I1205 07:46:59.246246  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.246255  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:59.246262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:59.246321  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:59.274955  299667 cri.go:89] found id: ""
	I1205 07:46:59.274991  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.274999  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:59.275006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:59.275074  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:59.302932  299667 cri.go:89] found id: ""
	I1205 07:46:59.302956  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.302965  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:59.302984  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:59.302997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:59.362548  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:59.362571  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:59.362583  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:59.387053  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:59.387085  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.413739  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:59.413767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:59.469532  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:59.469569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:00.602658  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:03.102385  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
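	Note: interleaved with the 299667 diagnostic loop, process 297527 (the no-preload profile, per the node name) keeps polling the node object at https://192.168.76.2:8443 and logs node_ready.go:55 on every refused dial. A hedged sketch of such a poll-until-answered loop (URL and cadence are taken from the log; a real client would also authenticate, so this only checks reachability and is not the minikube implementation):

	// Hedged sketch: poll a node endpoint until the apiserver answers or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is self-signed in this setup; skip verification for the probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("will retry:", err) // matches the connection-refused warnings above
				time.Sleep(2500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("gave up waiting on", url)
	}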
	I1205 07:47:01.983455  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.994190  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:01.994316  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:02.023883  299667 cri.go:89] found id: ""
	I1205 07:47:02.023913  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.023922  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:02.023929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:02.023992  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:02.050293  299667 cri.go:89] found id: ""
	I1205 07:47:02.050367  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.050383  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:02.050390  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:02.050458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:02.076131  299667 cri.go:89] found id: ""
	I1205 07:47:02.076157  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.076166  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:02.076172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:02.076235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:02.115590  299667 cri.go:89] found id: ""
	I1205 07:47:02.115623  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.115632  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:02.115638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:02.115733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:02.155255  299667 cri.go:89] found id: ""
	I1205 07:47:02.155281  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.155290  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:02.155297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:02.155355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:02.184142  299667 cri.go:89] found id: ""
	I1205 07:47:02.184169  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.184178  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:02.184185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:02.184244  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:02.208969  299667 cri.go:89] found id: ""
	I1205 07:47:02.208997  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.209006  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:02.209036  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:02.209126  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:02.233523  299667 cri.go:89] found id: ""
	I1205 07:47:02.233556  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.233565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:02.233597  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:02.233609  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:02.289818  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:02.289852  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:02.303686  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:02.303756  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:02.370663  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:02.370711  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:02.370723  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:02.395466  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:02.395508  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:04.925546  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.937771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:04.937866  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:04.967009  299667 cri.go:89] found id: ""
	I1205 07:47:04.967031  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.967039  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:04.967046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:04.967103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:04.998327  299667 cri.go:89] found id: ""
	I1205 07:47:04.998351  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.998360  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:04.998365  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:04.998426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:05.026478  299667 cri.go:89] found id: ""
	I1205 07:47:05.026505  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.026513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:05.026521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:05.026583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:05.051556  299667 cri.go:89] found id: ""
	I1205 07:47:05.051580  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.051588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:05.051595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:05.051658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:05.078546  299667 cri.go:89] found id: ""
	I1205 07:47:05.078570  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.078579  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:05.078585  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:05.078649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1205 07:47:05.102744  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:07.602359  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:09.603452  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:05.107928  299667 cri.go:89] found id: ""
	I1205 07:47:05.107955  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.107964  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:05.107971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:05.108035  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:05.134695  299667 cri.go:89] found id: ""
	I1205 07:47:05.134718  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.134727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:05.134733  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:05.134792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:05.160991  299667 cri.go:89] found id: ""
	I1205 07:47:05.161017  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.161025  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:05.161035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:05.161048  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:05.211053  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:47:05.219354  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:05.219426  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:05.274067  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:05.274165  299667 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
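	Note: the --validate=false hint in the stderr is a red herring here: client-side validation fails only because kubectl cannot download the OpenAPI schema from the dead apiserver, and the apply itself would fail against the same refused connection, which is why addons.go logs "apply failed, will retry" instead of disabling validation. A hedged sketch of an apply-with-retry wrapper in that spirit (applyManifest and the backoff values are hypothetical, not the addons.go code):

	// Hedged sketch: retry a manifest apply while the apiserver is coming up.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyManifest(path string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v: %s", path, err, out)
		}
		return nil
	}

	func main() {
		var err error
		for attempt, wait := 1, time.Second; attempt <= 5; attempt, wait = attempt+1, wait*2 {
			if err = applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err == nil {
				fmt.Println("applied")
				return
			}
			fmt.Printf("attempt %d failed, will retry in %v: %v\n", attempt, wait, err)
			time.Sleep(wait)
		}
		fmt.Println("giving up:", err)
	}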
	I1205 07:47:05.274831  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.274851  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.336443  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.336473  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:05.336486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:05.361343  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.361374  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:07.887800  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.899185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:07.899259  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:07.927401  299667 cri.go:89] found id: ""
	I1205 07:47:07.927423  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.927431  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:07.927437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:07.927511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:07.958986  299667 cri.go:89] found id: ""
	I1205 07:47:07.959008  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.959017  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:07.959023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:07.959081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:07.986953  299667 cri.go:89] found id: ""
	I1205 07:47:07.986974  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.986983  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:07.986989  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:07.987052  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:08.013548  299667 cri.go:89] found id: ""
	I1205 07:47:08.013573  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.013581  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:08.013590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:08.013654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:08.039626  299667 cri.go:89] found id: ""
	I1205 07:47:08.039650  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.039658  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.039664  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:08.039724  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:08.064448  299667 cri.go:89] found id: ""
	I1205 07:47:08.064472  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.064482  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:08.064489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:08.064548  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:08.089144  299667 cri.go:89] found id: ""
	I1205 07:47:08.089234  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.089250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.089257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:08.089325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:08.124837  299667 cri.go:89] found id: ""
	I1205 07:47:08.124863  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.124890  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:08.124900  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.124917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.155028  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.155055  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.215310  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.215346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.229549  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.229577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.292266  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.292296  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:08.292309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
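	Note: the dashboard enable that follows bundles ten manifests into a single kubectl apply, so one dead apiserver produces ten identical per-file validation errors. A hedged sketch of building such an invocation (the paths match the log but the list is truncated here; this is not how minikube assembles the command):

	// Hedged sketch: one kubectl apply over several -f manifests, as in the dashboard addon below.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...seven more in the real invocation
		}
		args := []string{"apply", "--force"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		// Each file fails validation independently when port 8443 refuses connections.
		fmt.Printf("err=%v\n%s", err, out)
	}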
	I1205 07:47:08.394608  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:47:08.457975  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:08.458074  299667 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
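	The stderr already names the escape hatch: client-side validation needs the server's OpenAPI schema, so with the apiserver down it can be skipped via --validate=false. A sketch against one of the manifests from the log:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml
	    # this only bypasses client-side validation; the apply itself still needs a
	    # reachable apiserver, so here it would fail with the same connection refused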
	W1205 07:47:12.102433  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:14.102787  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
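	Interleaved with the addon retries, the no-preload-241270 start (pid 297527) is polling that node's Ready condition against 192.168.76.2:8443 and retrying on every refused dial. The same check by hand, assuming a kubeconfig that targets that endpoint:

	    # hypothetical equivalent of the node_ready.go poll
	    kubectl get node no-preload-241270 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'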
	I1205 07:47:10.816831  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:10.827471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:10.827537  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:10.856590  299667 cri.go:89] found id: ""
	I1205 07:47:10.856612  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.856621  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:10.856626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:10.856687  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:10.887186  299667 cri.go:89] found id: ""
	I1205 07:47:10.887207  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.887215  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:10.887221  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:10.887279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:10.914460  299667 cri.go:89] found id: ""
	I1205 07:47:10.914482  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.914490  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:10.914497  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:10.914554  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:10.943070  299667 cri.go:89] found id: ""
	I1205 07:47:10.943095  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.943103  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:10.943109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:10.943167  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:10.967007  299667 cri.go:89] found id: ""
	I1205 07:47:10.967034  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.967043  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:10.967050  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:10.967142  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:10.990367  299667 cri.go:89] found id: ""
	I1205 07:47:10.990394  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.990402  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:10.990408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:10.990465  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:11.021515  299667 cri.go:89] found id: ""
	I1205 07:47:11.021538  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.021547  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.021553  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:11.021616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:11.046137  299667 cri.go:89] found id: ""
	I1205 07:47:11.046159  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.046168  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
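	Each gather cycle starts by probing for control-plane containers: pgrep for a live kube-apiserver process, then one crictl query per component; the empty `found id: ""` results are what produce the No-container-found warnings. Run by hand, the probe is roughly:

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # non-zero exit if no such process
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$c" | grep . || echo "no $c container"
	    done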
	I1205 07:47:11.046176  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:11.046190  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:11.071756  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.071787  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:11.101757  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:11.101784  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:11.175924  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:11.175962  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
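	With no containers found, minikube falls back to host-level log sources; the exact commands appear in the lines above and can be replayed verbatim on the node:

	    sudo journalctl -u containerd -n 400   # last 400 lines of the containerd unit
	    sudo journalctl -u kubelet -n 400      # last 400 lines of the kubelet unit
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400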
	I1205 07:47:11.190392  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.190424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.252655  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
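	The "describe nodes" gather fails identically: kubectl is pointed at the node-local kubeconfig, whose server is localhost:8443, so API discovery is refused before any describe can run. Replayed by hand from the node:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # the repeated memcache.go errors come from kubectl's cached discovery client
	    # retrying the /api group list; they clear once the apiserver is reachable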
	I1205 07:47:13.753819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:13.764287  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:13.764373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:13.790393  299667 cri.go:89] found id: ""
	I1205 07:47:13.790418  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.790426  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:13.790433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:13.790496  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:13.814911  299667 cri.go:89] found id: ""
	I1205 07:47:13.814935  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.814944  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:13.814951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:13.815007  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:13.839756  299667 cri.go:89] found id: ""
	I1205 07:47:13.839779  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.839787  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:13.839794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:13.839852  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:13.870908  299667 cri.go:89] found id: ""
	I1205 07:47:13.870933  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.870943  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:13.870949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:13.871010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:13.902182  299667 cri.go:89] found id: ""
	I1205 07:47:13.902208  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.902216  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:13.902223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:13.902281  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:13.928077  299667 cri.go:89] found id: ""
	I1205 07:47:13.928102  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.928111  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:13.928117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:13.928174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:13.952673  299667 cri.go:89] found id: ""
	I1205 07:47:13.952706  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.952715  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:13.952721  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:13.952786  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:13.982104  299667 cri.go:89] found id: ""
	I1205 07:47:13.982137  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.982147  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:13.982156  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:13.982168  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:14.047894  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:14.047925  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:14.061830  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:14.061861  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:14.145569  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:14.145587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:14.145601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:14.173369  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:14.173406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:16.701890  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:16.712471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:16.712541  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:16.737364  299667 cri.go:89] found id: ""
	I1205 07:47:16.737386  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.737394  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:16.737400  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:16.737458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:16.761826  299667 cri.go:89] found id: ""
	I1205 07:47:16.761849  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.761858  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:16.761864  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:16.761921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:16.787321  299667 cri.go:89] found id: ""
	I1205 07:47:16.787343  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.787352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:16.787359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:16.787419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:16.812059  299667 cri.go:89] found id: ""
	I1205 07:47:16.812080  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.812087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:16.812094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:16.812152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:16.835710  299667 cri.go:89] found id: ""
	I1205 07:47:16.835731  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.835739  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:16.835745  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:16.835804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:16.866817  299667 cri.go:89] found id: ""
	I1205 07:47:16.866839  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.866848  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:16.866854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:16.866915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:16.892855  299667 cri.go:89] found id: ""
	I1205 07:47:16.892877  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.892885  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:16.892891  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:16.892948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:16.921328  299667 cri.go:89] found id: ""
	I1205 07:47:16.921348  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.921356  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:16.921365  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.921378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.975810  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.975843  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:16.989559  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.989589  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:17.052011  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:17.052031  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:17.052044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:17.076823  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:17.076853  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:18.609402  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:47:18.686960  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:18.687059  299667 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
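	storage-provisioner hits the identical path: a single manifest, the same refused OpenAPI download, and an addons.go retry queued. Once the apiserver is healthy again the addon can also be re-enabled from the host; a hedged sketch with a placeholder profile name:

	    # re-enable after the cluster recovers (profile name is a placeholder)
	    minikube addons enable storage-provisioner -p <profile>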
	I1205 07:47:18.690290  299667 out.go:179] * Enabled addons: 
	W1205 07:47:16.602616  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:19.102330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:18.693172  299667 addons.go:530] duration metric: took 1m46.271465904s for enable addons: enabled=[]
	I1205 07:47:19.612423  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:19.623124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:19.623194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:19.651237  299667 cri.go:89] found id: ""
	I1205 07:47:19.651260  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.651268  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:19.651276  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:19.651338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:19.679760  299667 cri.go:89] found id: ""
	I1205 07:47:19.679781  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.679790  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:19.679795  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:19.679854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:19.703620  299667 cri.go:89] found id: ""
	I1205 07:47:19.703640  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.703652  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:19.703658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:19.703731  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:19.727543  299667 cri.go:89] found id: ""
	I1205 07:47:19.727607  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.727629  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:19.727645  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:19.727736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:19.751580  299667 cri.go:89] found id: ""
	I1205 07:47:19.751606  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.751614  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:19.751620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:19.751678  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:19.778033  299667 cri.go:89] found id: ""
	I1205 07:47:19.778058  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.778066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:19.778074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:19.778130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:19.805321  299667 cri.go:89] found id: ""
	I1205 07:47:19.805346  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.805354  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:19.805360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:19.805419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:19.828911  299667 cri.go:89] found id: ""
	I1205 07:47:19.828932  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.828940  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:19.828949  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:19.828961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:19.842046  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:19.842072  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:19.924477  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:19.924542  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:19.924568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:19.949241  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:19.949279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:19.977260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:19.977287  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:47:21.102389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:23.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:22.534572  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:22.545193  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:22.545272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:22.570057  299667 cri.go:89] found id: ""
	I1205 07:47:22.570083  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.570092  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:22.570098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:22.570163  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:22.595296  299667 cri.go:89] found id: ""
	I1205 07:47:22.595321  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.595330  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:22.595337  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:22.595421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:22.620283  299667 cri.go:89] found id: ""
	I1205 07:47:22.620307  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.620315  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:22.620322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:22.620399  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:22.644353  299667 cri.go:89] found id: ""
	I1205 07:47:22.644379  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.644389  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:22.644395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:22.644474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:22.674856  299667 cri.go:89] found id: ""
	I1205 07:47:22.674885  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.674894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:22.674900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:22.674980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:22.699975  299667 cri.go:89] found id: ""
	I1205 07:47:22.700002  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.700011  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:22.700018  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:22.700089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:22.725706  299667 cri.go:89] found id: ""
	I1205 07:47:22.725734  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.725743  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:22.725753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:22.725822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:22.750409  299667 cri.go:89] found id: ""
	I1205 07:47:22.750430  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.750439  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:22.750459  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:22.750471  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:22.775719  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:22.775754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:22.806148  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:22.806175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.863750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:22.863786  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:22.878145  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:22.878174  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:22.945284  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:47:25.602789  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:28.102396  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:25.446099  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:25.457267  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:25.457345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:25.484246  299667 cri.go:89] found id: ""
	I1205 07:47:25.484273  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.484282  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:25.484289  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:25.484346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:25.513783  299667 cri.go:89] found id: ""
	I1205 07:47:25.513806  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.513815  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:25.513821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:25.513895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:25.542603  299667 cri.go:89] found id: ""
	I1205 07:47:25.542627  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.542636  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:25.542642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:25.542768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:25.566393  299667 cri.go:89] found id: ""
	I1205 07:47:25.566417  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.566427  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:25.566433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:25.566510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:25.591113  299667 cri.go:89] found id: ""
	I1205 07:47:25.591148  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.591157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:25.591164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:25.591237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:25.619895  299667 cri.go:89] found id: ""
	I1205 07:47:25.619919  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.619928  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:25.619935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:25.619991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:25.645287  299667 cri.go:89] found id: ""
	I1205 07:47:25.645311  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.645319  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:25.645326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:25.645386  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:25.670944  299667 cri.go:89] found id: ""
	I1205 07:47:25.670967  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.670975  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:25.671025  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:25.671043  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:25.728687  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:25.728721  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:25.743347  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:25.743373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:25.808046  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:25.808069  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:25.808082  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:25.833265  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:25.833298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:28.366360  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:28.378460  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:28.378539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:28.413651  299667 cri.go:89] found id: ""
	I1205 07:47:28.413678  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.413687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:28.413694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:28.413755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:28.439196  299667 cri.go:89] found id: ""
	I1205 07:47:28.439223  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.439232  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:28.439238  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:28.439323  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:28.463516  299667 cri.go:89] found id: ""
	I1205 07:47:28.463587  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.463610  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:28.463628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:28.463709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:28.489425  299667 cri.go:89] found id: ""
	I1205 07:47:28.489450  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.489459  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:28.489467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:28.489560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:28.516772  299667 cri.go:89] found id: ""
	I1205 07:47:28.516797  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.516806  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:28.516812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:28.516872  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:28.543466  299667 cri.go:89] found id: ""
	I1205 07:47:28.543490  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.543498  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:28.543507  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:28.543564  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:28.568431  299667 cri.go:89] found id: ""
	I1205 07:47:28.568455  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.568463  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:28.568469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:28.568528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:28.593549  299667 cri.go:89] found id: ""
	I1205 07:47:28.593573  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.593581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
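
Each control-plane component is probed the same way: crictl ps -a --quiet --name=<component> prints matching container IDs, and an empty result is logged as "No container was found matching ...". A condensed sketch of that probe sequence (the component list is taken from the log above; running it yourself requires shell access to the node, which is an assumption here):

    # Probe for each expected component; report the ones with no container.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching $c"
    done
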
	I1205 07:47:28.593590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:28.593601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:28.652330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:28.652364  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
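
The kubelet, containerd, and dmesg gathering steps are plain journalctl and dmesg invocations. For reference, the equivalent standalone commands (flags per systemd's journalctl and util-linux dmesg; the 400-line caps mirror the log above):

    sudo journalctl -u kubelet -n 400      # last 400 lines of the kubelet unit
    sudo journalctl -u containerd -n 400   # same for the containerd unit
    # -P: no pager, -H: human-readable, -L=never: no color,
    # --level: keep only warning-and-worse kernel messages.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
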
	I1205 07:47:28.665857  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:28.665882  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:28.733864  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
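
The describe-nodes step fails for the same underlying reason as the probes above: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and nothing is listening there because no kube-apiserver container ever started. A minimal manual check from inside the node (node access via minikube ssh is an assumption; the kubectl path and kubeconfig are taken from the log):

    # Ask the apiserver for the node list using the in-node kubeconfig ...
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    # ... or hit the endpoint directly; "connection refused" confirms the
    # port is closed, rather than, say, a TLS or auth problem.
    curl -k https://localhost:8443/healthz
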
	I1205 07:47:28.733886  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:28.733898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:28.758935  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:28.758971  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:30.102577  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:32.602389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:34.602704  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
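
The interleaved W-level lines from process 297527 come from the parallel no-preload test, which is polling the "no-preload-241270" node's Ready condition against its own apiserver at 192.168.76.2:8443 and hitting the same connection-refused symptom. A hypothetical equivalent of that poll (the kubectl context name mirrors the profile name and is an assumption):

    # Print the node's Ready condition status ("True", "False", or an error
    # while the apiserver is unreachable).
    kubectl --context no-preload-241270 get node no-preload-241270 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
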
	I1205 07:47:31.286625  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:31.297007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:31.297075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:31.324486  299667 cri.go:89] found id: ""
	I1205 07:47:31.324508  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.324517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:31.324523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:31.324585  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:31.367211  299667 cri.go:89] found id: ""
	I1205 07:47:31.367234  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.367242  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:31.367249  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:31.367336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:31.398063  299667 cri.go:89] found id: ""
	I1205 07:47:31.398124  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.398148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:31.398166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:31.398239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:31.430255  299667 cri.go:89] found id: ""
	I1205 07:47:31.430280  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.430288  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:31.430303  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:31.430362  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:31.455188  299667 cri.go:89] found id: ""
	I1205 07:47:31.455213  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.455222  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:31.455228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:31.455304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:31.483709  299667 cri.go:89] found id: ""
	I1205 07:47:31.483734  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.483743  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:31.483754  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:31.483841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:31.511054  299667 cri.go:89] found id: ""
	I1205 07:47:31.511081  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.511090  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:31.511096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:31.511154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:31.536168  299667 cri.go:89] found id: ""
	I1205 07:47:31.536193  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.536202  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:31.536211  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:31.536222  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:31.592031  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:31.592066  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:31.606480  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:31.606506  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:31.673271  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:31.673294  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:31.673309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:31.699030  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:31.699063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:34.230473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:34.241086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:34.241182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:34.266354  299667 cri.go:89] found id: ""
	I1205 07:47:34.266377  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.266386  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:34.266393  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:34.266455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:34.295281  299667 cri.go:89] found id: ""
	I1205 07:47:34.295304  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.295313  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:34.295322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:34.295381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:34.320096  299667 cri.go:89] found id: ""
	I1205 07:47:34.320119  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.320127  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:34.320134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:34.320193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:34.351699  299667 cri.go:89] found id: ""
	I1205 07:47:34.351769  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.351778  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:34.351785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:34.351890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:34.384621  299667 cri.go:89] found id: ""
	I1205 07:47:34.384643  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.384651  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:34.384658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:34.384716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:34.416183  299667 cri.go:89] found id: ""
	I1205 07:47:34.416209  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.416217  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:34.416225  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:34.416303  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:34.442818  299667 cri.go:89] found id: ""
	I1205 07:47:34.442843  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.442852  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:34.442859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:34.442926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:34.467574  299667 cri.go:89] found id: ""
	I1205 07:47:34.467600  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.467608  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:34.467618  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:34.467630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:34.525566  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:34.525599  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:34.538971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:34.539003  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:34.603104  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:34.603123  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:34.603135  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:34.627990  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:34.628024  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:37.102277  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:39.102399  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:37.156741  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:37.168917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:37.168986  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:37.194896  299667 cri.go:89] found id: ""
	I1205 07:47:37.194920  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.194929  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:37.194935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:37.194996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:37.220279  299667 cri.go:89] found id: ""
	I1205 07:47:37.220316  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.220324  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:37.220331  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:37.220402  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:37.244728  299667 cri.go:89] found id: ""
	I1205 07:47:37.244759  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.244768  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:37.244774  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:37.244838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:37.269770  299667 cri.go:89] found id: ""
	I1205 07:47:37.269794  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.269802  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:37.269809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:37.269865  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:37.296343  299667 cri.go:89] found id: ""
	I1205 07:47:37.296367  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.296376  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:37.296382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:37.296444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:37.321553  299667 cri.go:89] found id: ""
	I1205 07:47:37.321576  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.321585  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:37.321592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:37.321651  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:37.356802  299667 cri.go:89] found id: ""
	I1205 07:47:37.356824  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.356834  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:37.356841  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:37.356901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:37.384475  299667 cri.go:89] found id: ""
	I1205 07:47:37.384497  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.384505  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:37.384513  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:37.384524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:37.451184  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:37.451220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:37.465508  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:37.465535  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:37.531461  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:37.531483  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:37.531495  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:37.556492  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:37.556531  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.084953  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:47:41.103193  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:43.602434  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:40.099166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:40.099240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:40.129037  299667 cri.go:89] found id: ""
	I1205 07:47:40.129058  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.129066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:40.129074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:40.129147  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:40.166711  299667 cri.go:89] found id: ""
	I1205 07:47:40.166735  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.166743  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:40.166752  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:40.166813  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:40.192959  299667 cri.go:89] found id: ""
	I1205 07:47:40.192982  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.192991  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:40.192998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:40.193056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:40.218168  299667 cri.go:89] found id: ""
	I1205 07:47:40.218193  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.218202  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:40.218208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:40.218292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:40.243397  299667 cri.go:89] found id: ""
	I1205 07:47:40.243420  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.243428  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:40.243435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:40.243510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:40.268685  299667 cri.go:89] found id: ""
	I1205 07:47:40.268710  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.268718  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:40.268725  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:40.268802  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:40.294417  299667 cri.go:89] found id: ""
	I1205 07:47:40.294443  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.294452  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:40.294480  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:40.294561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:40.321495  299667 cri.go:89] found id: ""
	I1205 07:47:40.321556  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.321570  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:40.321580  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:40.321592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.360106  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:40.360133  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:40.420594  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:40.420627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:40.437302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:40.437332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:40.503821  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:40.503843  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:40.503855  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.028974  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:43.039847  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:43.039922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:43.066179  299667 cri.go:89] found id: ""
	I1205 07:47:43.066202  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.066210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:43.066216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:43.066274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:43.092504  299667 cri.go:89] found id: ""
	I1205 07:47:43.092528  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.092536  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:43.092543  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:43.092610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:43.124060  299667 cri.go:89] found id: ""
	I1205 07:47:43.124086  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.124095  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:43.124102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:43.124166  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:43.154063  299667 cri.go:89] found id: ""
	I1205 07:47:43.154089  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.154098  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:43.154104  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:43.154174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:43.185231  299667 cri.go:89] found id: ""
	I1205 07:47:43.185255  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.185264  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:43.185271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:43.185334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:43.214039  299667 cri.go:89] found id: ""
	I1205 07:47:43.214113  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.214135  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:43.214153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:43.214239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:43.239645  299667 cri.go:89] found id: ""
	I1205 07:47:43.239709  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.239730  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:43.239747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:43.239836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:43.264373  299667 cri.go:89] found id: ""
	I1205 07:47:43.264437  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.264458  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:43.264478  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:43.264514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:43.320427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:43.320464  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:43.334556  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:43.334586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:43.419578  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:43.419600  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:43.419613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.444937  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:43.444974  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:45.602606  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:48.102422  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:45.973125  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:45.983741  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:45.983836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:46.021150  299667 cri.go:89] found id: ""
	I1205 07:47:46.021200  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.021208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:46.021215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:46.021296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:46.046658  299667 cri.go:89] found id: ""
	I1205 07:47:46.046688  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.046725  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:46.046732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:46.046806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:46.072039  299667 cri.go:89] found id: ""
	I1205 07:47:46.072113  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.072136  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:46.072153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:46.072239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:46.117323  299667 cri.go:89] found id: ""
	I1205 07:47:46.117399  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.117423  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:46.117448  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:46.117538  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:46.154886  299667 cri.go:89] found id: ""
	I1205 07:47:46.154912  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.154921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:46.154928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:46.155012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:46.181153  299667 cri.go:89] found id: ""
	I1205 07:47:46.181199  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.181208  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:46.181215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:46.181302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:46.211244  299667 cri.go:89] found id: ""
	I1205 07:47:46.211270  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.211279  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:46.211285  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:46.211346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:46.235089  299667 cri.go:89] found id: ""
	I1205 07:47:46.235164  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.235180  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:46.235191  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:46.235203  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:46.305530  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:46.305551  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:46.305563  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:46.330757  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:46.330792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:46.376750  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:46.376781  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:46.439507  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:46.439542  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:48.953904  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:48.964561  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:48.964628  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:48.987874  299667 cri.go:89] found id: ""
	I1205 07:47:48.987900  299667 logs.go:282] 0 containers: []
	W1205 07:47:48.987909  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:48.987916  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:48.987974  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:49.014890  299667 cri.go:89] found id: ""
	I1205 07:47:49.014966  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.014980  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:49.014988  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:49.015065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:49.040290  299667 cri.go:89] found id: ""
	I1205 07:47:49.040313  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.040321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:49.040328  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:49.040385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:49.065216  299667 cri.go:89] found id: ""
	I1205 07:47:49.065278  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.065287  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:49.065293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:49.065350  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:49.091916  299667 cri.go:89] found id: ""
	I1205 07:47:49.091941  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.091950  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:49.091956  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:49.092015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:49.122078  299667 cri.go:89] found id: ""
	I1205 07:47:49.122101  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.122110  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:49.122117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:49.122174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:49.148378  299667 cri.go:89] found id: ""
	I1205 07:47:49.148400  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.148409  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:49.148415  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:49.148474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:49.181597  299667 cri.go:89] found id: ""
	I1205 07:47:49.181623  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.181639  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:49.181649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:49.181660  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:49.237429  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:49.237462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:49.252514  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:49.252540  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:49.317886  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
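The describe-nodes probe fails for the same underlying reason: nothing is listening on the apiserver port, so every request is refused at the TCP layer before kubectl can reach the API. The command itself is copied verbatim from the log; running it through minikube ssh on the affected profile is an assumption about how one would reproduce it by hand:

    # <profile> is a placeholder for the profile under test.
    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # While the apiserver is down this exits 1 with "connection refused" on
    # localhost:8443, matching the stderr block above.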
	I1205 07:47:49.317908  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:49.317922  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:49.343471  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:49.343503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:50.103132  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:52.602329  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
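The interleaved 297527 lines come from the parallel no-preload-241270 test polling its node object and hitting the same symptom on 192.168.76.2:8443. A quick manual check, assuming one wants to confirm the refusal happens at the socket rather than in TLS or auth (the endpoint is taken from the warnings; curl and nc are standard tools, not part of the test run):

    # -k skips certificate verification; "connection refused" here means no
    # listener on the port at all.
    curl -k --max-time 5 https://192.168.76.2:8443/api/v1/nodes/no-preload-241270
    # Or test only the TCP handshake:
    nc -zv 192.168.76.2 8443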
	I1205 07:47:51.885282  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:51.895713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:51.895806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:51.923558  299667 cri.go:89] found id: ""
	I1205 07:47:51.923582  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.923592  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:51.923599  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:51.923702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:51.952466  299667 cri.go:89] found id: ""
	I1205 07:47:51.952490  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.952499  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:51.952506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:51.952594  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:51.977008  299667 cri.go:89] found id: ""
	I1205 07:47:51.977032  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.977041  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:51.977048  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:51.977130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:52.001855  299667 cri.go:89] found id: ""
	I1205 07:47:52.001880  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.001890  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:52.001918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:52.002010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:52.041299  299667 cri.go:89] found id: ""
	I1205 07:47:52.041367  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.041391  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:52.041410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:52.041490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:52.066425  299667 cri.go:89] found id: ""
	I1205 07:47:52.066448  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.066457  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:52.066484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:52.066567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:52.093389  299667 cri.go:89] found id: ""
	I1205 07:47:52.093415  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.093425  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:52.093431  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:52.093490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:52.131379  299667 cri.go:89] found id: ""
	I1205 07:47:52.131404  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.131412  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:52.131421  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:52.131432  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:52.172215  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:52.172246  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:52.232285  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:52.232317  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:52.246383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:52.246461  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:52.312938  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:52.312999  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:52.313037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:54.839218  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:54.849526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:54.849596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:54.878984  299667 cri.go:89] found id: ""
	I1205 07:47:54.879018  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.879028  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:54.879034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:54.879115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:54.903570  299667 cri.go:89] found id: ""
	I1205 07:47:54.903593  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.903603  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:54.903609  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:54.903668  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:54.928679  299667 cri.go:89] found id: ""
	I1205 07:47:54.928701  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.928710  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:54.928716  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:54.928772  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:54.957443  299667 cri.go:89] found id: ""
	I1205 07:47:54.957465  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.957474  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:54.957481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:54.957539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:54.981997  299667 cri.go:89] found id: ""
	I1205 07:47:54.982022  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.982031  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:54.982037  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:54.982097  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:55.019658  299667 cri.go:89] found id: ""
	I1205 07:47:55.019684  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.019694  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:55.019702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:55.019774  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:55.045945  299667 cri.go:89] found id: ""
	I1205 07:47:55.045968  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.045977  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:55.045982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:55.046047  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:55.070660  299667 cri.go:89] found id: ""
	I1205 07:47:55.070682  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.070691  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:55.070753  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:55.070772  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:55.155877  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:47:55.103139  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:57.602889  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:55.155904  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:55.155918  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:55.182506  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:55.182538  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:55.209519  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:55.209545  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:55.268283  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:55.268315  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
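With every container probe coming back empty, the gatherer falls back to host-level sources in the order repeated in each cycle. The exact commands from the log, collected here so they can be re-run by hand inside the guest:

    sudo journalctl -u kubelet -n 400          # kubelet unit log, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400       # container runtime log
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status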
	I1205 07:47:57.781956  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:57.792419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:57.792511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:57.816805  299667 cri.go:89] found id: ""
	I1205 07:47:57.816830  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.816839  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:57.816845  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:57.816907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:57.844943  299667 cri.go:89] found id: ""
	I1205 07:47:57.844967  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.844975  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:57.844982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:57.845041  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:57.869698  299667 cri.go:89] found id: ""
	I1205 07:47:57.869720  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.869728  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:57.869735  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:57.869792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:57.894855  299667 cri.go:89] found id: ""
	I1205 07:47:57.894881  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.894889  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:57.894896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:57.895015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:57.919181  299667 cri.go:89] found id: ""
	I1205 07:47:57.919207  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.919217  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:57.919223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:57.919284  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:57.947523  299667 cri.go:89] found id: ""
	I1205 07:47:57.947545  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.947553  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:57.947559  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:57.947617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:57.972190  299667 cri.go:89] found id: ""
	I1205 07:47:57.972212  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.972221  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:57.972227  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:57.972337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:57.995598  299667 cri.go:89] found id: ""
	I1205 07:47:57.995620  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.995628  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:57.995637  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:57.995648  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:58.053180  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:58.053214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:58.066958  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:58.067035  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:58.148853  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:58.148871  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:58.148884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:58.177078  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:58.177111  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:00.102486  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:02.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:04.602418  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:00.709764  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:00.720636  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:00.720709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:00.745332  299667 cri.go:89] found id: ""
	I1205 07:48:00.745357  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.745367  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:00.745377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:00.745446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:00.769743  299667 cri.go:89] found id: ""
	I1205 07:48:00.769766  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.769774  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:00.769780  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:00.769838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:00.793723  299667 cri.go:89] found id: ""
	I1205 07:48:00.793747  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.793755  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:00.793761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:00.793849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:00.822270  299667 cri.go:89] found id: ""
	I1205 07:48:00.822295  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.822304  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:00.822311  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:00.822372  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:00.846055  299667 cri.go:89] found id: ""
	I1205 07:48:00.846079  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.846088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:00.846094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:00.846154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:00.875896  299667 cri.go:89] found id: ""
	I1205 07:48:00.875927  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.875938  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:00.875945  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:00.876005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:00.901376  299667 cri.go:89] found id: ""
	I1205 07:48:00.901401  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.901410  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:00.901417  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:00.901478  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:00.931038  299667 cri.go:89] found id: ""
	I1205 07:48:00.931062  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.931070  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:00.931080  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:00.931121  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:00.997183  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:00.997205  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:00.997217  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:01.023514  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:01.023552  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:01.051665  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:01.051694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:01.112451  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:01.112528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:03.628641  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:03.640043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:03.640115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:03.668895  299667 cri.go:89] found id: ""
	I1205 07:48:03.668923  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.668932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:03.668939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:03.669005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:03.698851  299667 cri.go:89] found id: ""
	I1205 07:48:03.698873  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.698882  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:03.698888  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:03.698946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:03.724736  299667 cri.go:89] found id: ""
	I1205 07:48:03.724758  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.724767  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:03.724773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:03.724831  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:03.751007  299667 cri.go:89] found id: ""
	I1205 07:48:03.751030  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.751038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:03.751072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:03.751143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:03.779130  299667 cri.go:89] found id: ""
	I1205 07:48:03.779153  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.779162  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:03.779168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:03.779226  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:03.808717  299667 cri.go:89] found id: ""
	I1205 07:48:03.808738  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.808798  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:03.808812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:03.808893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:03.834648  299667 cri.go:89] found id: ""
	I1205 07:48:03.834745  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.834769  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:03.834790  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:03.834894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:03.860266  299667 cri.go:89] found id: ""
	I1205 07:48:03.860290  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.860298  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:03.860307  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:03.860326  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:03.925650  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:03.925672  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:03.925684  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:03.951836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:03.951866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:03.981147  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:03.981199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:04.037271  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:04.037308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:48:07.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:09.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:06.551820  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:06.562850  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:06.562922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:06.588022  299667 cri.go:89] found id: ""
	I1205 07:48:06.588044  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.588052  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:06.588059  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:06.588121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:06.618654  299667 cri.go:89] found id: ""
	I1205 07:48:06.618677  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.618687  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:06.618693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:06.618760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:06.654167  299667 cri.go:89] found id: ""
	I1205 07:48:06.654188  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.654197  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:06.654203  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:06.654261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:06.681234  299667 cri.go:89] found id: ""
	I1205 07:48:06.681306  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.681327  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:06.681345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:06.681437  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:06.705922  299667 cri.go:89] found id: ""
	I1205 07:48:06.705946  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.705955  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:06.705962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:06.706044  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:06.730881  299667 cri.go:89] found id: ""
	I1205 07:48:06.730913  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.730924  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:06.730930  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:06.730987  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:06.755636  299667 cri.go:89] found id: ""
	I1205 07:48:06.755661  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.755670  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:06.755676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:06.755743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:06.780702  299667 cri.go:89] found id: ""
	I1205 07:48:06.780735  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.780743  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:06.780753  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:06.780764  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:06.841265  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:06.841303  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.854661  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:06.854686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:06.918298  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:06.918316  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:06.918328  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:06.943239  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:06.943274  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
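Each cycle is gated on the same pgrep probe, which matches only a kube-apiserver process started with minikube's arguments; the roughly three-second spacing between cycles suggests a fixed retry interval around it. A hypothetical wait loop built from that probe (interval and timeout are illustrative, not taken from the log):

    # Wait up to ~120s for a minikube-flavoured kube-apiserver process.
    for i in $(seq 1 60); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo up; break; }
      sleep 2
    done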
	I1205 07:48:09.471658  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:09.482526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:09.482598  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:09.507658  299667 cri.go:89] found id: ""
	I1205 07:48:09.507683  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.507692  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:09.507699  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:09.507765  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:09.538688  299667 cri.go:89] found id: ""
	I1205 07:48:09.538744  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.538758  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:09.538765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:09.538835  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:09.564016  299667 cri.go:89] found id: ""
	I1205 07:48:09.564041  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.564050  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:09.564056  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:09.564118  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:09.595020  299667 cri.go:89] found id: ""
	I1205 07:48:09.595047  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.595056  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:09.595062  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:09.595170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:09.627725  299667 cri.go:89] found id: ""
	I1205 07:48:09.627747  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.627756  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:09.627763  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:09.627821  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:09.661208  299667 cri.go:89] found id: ""
	I1205 07:48:09.661273  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.661290  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:09.661297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:09.661371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:09.686173  299667 cri.go:89] found id: ""
	I1205 07:48:09.686207  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.686216  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:09.686223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:09.686291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:09.710385  299667 cri.go:89] found id: ""
	I1205 07:48:09.710417  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.710426  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:09.710435  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:09.710447  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:09.724065  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:09.724089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:09.786352  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:09.786371  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:09.786383  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:09.814782  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:09.814823  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.845678  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:09.845705  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
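	The sweep above is minikube's standard log-collection pass: one crictl probe per control-plane component, then dmesg, describe nodes, containerd, container status, and kubelet. It repeats on every retry below. Condensed by hand from the commands in this log (the loop is editorial shorthand, not minikube's code; run inside the node):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"    # empty output = component not running
	    done
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo journalctl -u kubelet -n 400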
	W1205 07:48:11.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:14.102692  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:12.403586  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:12.414137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:12.414208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:12.443644  299667 cri.go:89] found id: ""
	I1205 07:48:12.443666  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.443677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:12.443683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:12.443743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:12.468970  299667 cri.go:89] found id: ""
	I1205 07:48:12.468992  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.469001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:12.469007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:12.469073  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:12.495420  299667 cri.go:89] found id: ""
	I1205 07:48:12.495441  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.495449  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:12.495455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:12.495513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:12.520821  299667 cri.go:89] found id: ""
	I1205 07:48:12.520848  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.520857  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:12.520862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:12.520920  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:12.546738  299667 cri.go:89] found id: ""
	I1205 07:48:12.546767  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.546776  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:12.546782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:12.546845  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:12.571663  299667 cri.go:89] found id: ""
	I1205 07:48:12.571687  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.571696  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:12.571702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:12.571759  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:12.600237  299667 cri.go:89] found id: ""
	I1205 07:48:12.600263  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.600272  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:12.600279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:12.600336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:12.645073  299667 cri.go:89] found id: ""
	I1205 07:48:12.645108  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.645116  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:12.645126  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:12.645137  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:12.661987  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:12.662020  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:12.726418  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:12.726442  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:12.726455  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:12.751208  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:12.751243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:12.780690  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:12.780718  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:16.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:18.602693  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:15.336959  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:15.349150  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:15.349233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:15.379055  299667 cri.go:89] found id: ""
	I1205 07:48:15.379075  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.379084  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:15.379090  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:15.379148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:15.411812  299667 cri.go:89] found id: ""
	I1205 07:48:15.411832  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.411841  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:15.411849  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:15.411907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:15.436056  299667 cri.go:89] found id: ""
	I1205 07:48:15.436077  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.436085  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:15.436091  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:15.436152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:15.461323  299667 cri.go:89] found id: ""
	I1205 07:48:15.461345  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.461354  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:15.461360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:15.461416  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:15.490552  299667 cri.go:89] found id: ""
	I1205 07:48:15.490577  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.490586  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:15.490593  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:15.490682  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:15.519448  299667 cri.go:89] found id: ""
	I1205 07:48:15.519471  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.519480  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:15.519487  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:15.519544  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:15.548923  299667 cri.go:89] found id: ""
	I1205 07:48:15.548947  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.548956  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:15.548962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:15.549024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:15.574804  299667 cri.go:89] found id: ""
	I1205 07:48:15.574828  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.574839  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:15.574847  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:15.574878  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.634392  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:15.634428  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:15.651971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:15.651998  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:15.719384  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:15.719407  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:15.719418  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:15.743909  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:15.743941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.273819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:18.284902  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:18.284975  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:18.310770  299667 cri.go:89] found id: ""
	I1205 07:48:18.310793  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.310802  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:18.310809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:18.310868  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:18.335509  299667 cri.go:89] found id: ""
	I1205 07:48:18.335530  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.335538  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:18.335544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:18.335602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:18.367849  299667 cri.go:89] found id: ""
	I1205 07:48:18.367875  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.367884  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:18.367890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:18.367947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:18.397008  299667 cri.go:89] found id: ""
	I1205 07:48:18.397037  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.397046  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:18.397053  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:18.397115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:18.422994  299667 cri.go:89] found id: ""
	I1205 07:48:18.423017  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.423035  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:18.423043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:18.423109  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:18.447590  299667 cri.go:89] found id: ""
	I1205 07:48:18.447666  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.447689  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:18.447713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:18.447801  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:18.472279  299667 cri.go:89] found id: ""
	I1205 07:48:18.472353  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.472375  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:18.472392  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:18.472477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:18.497432  299667 cri.go:89] found id: ""
	I1205 07:48:18.497454  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.497463  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:18.497471  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:18.497484  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:18.522163  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:18.522196  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.550354  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:18.550378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:18.605871  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:18.605944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:18.623406  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:18.623435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:18.692830  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:48:20.603254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:23.103214  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
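	Both warning streams here are one symptom: no kube-apiserver container exists (every crictl probe above comes back empty), so nothing listens on port 8443, and both kubectl (localhost:8443) and the node-readiness poll (192.168.76.2:8443) are refused. A quick manual check from inside the node (a sketch; /livez is the standard kube-apiserver health endpoint, assumed here rather than taken from this log):

	    sudo crictl ps -a --quiet --name=kube-apiserver    # still empty while this repeats
	    curl -ks https://localhost:8443/livez || echo "apiserver not listening"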
	I1205 07:48:21.193117  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:21.203367  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:21.203430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:21.228233  299667 cri.go:89] found id: ""
	I1205 07:48:21.228257  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.228265  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:21.228272  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:21.228331  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:21.256427  299667 cri.go:89] found id: ""
	I1205 07:48:21.256448  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.256456  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:21.256462  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:21.256523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:21.281113  299667 cri.go:89] found id: ""
	I1205 07:48:21.281136  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.281145  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:21.281151  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:21.281238  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:21.305777  299667 cri.go:89] found id: ""
	I1205 07:48:21.305798  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.305806  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:21.305812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:21.305869  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:21.335558  299667 cri.go:89] found id: ""
	I1205 07:48:21.335622  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.335645  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:21.335662  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:21.335745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:21.374161  299667 cri.go:89] found id: ""
	I1205 07:48:21.374230  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.374257  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:21.374275  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:21.374358  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:21.403378  299667 cri.go:89] found id: ""
	I1205 07:48:21.403442  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.403464  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:21.403481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:21.403561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:21.428681  299667 cri.go:89] found id: ""
	I1205 07:48:21.428707  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.428717  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:21.428725  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:21.428736  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:21.485472  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:21.485503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:21.499440  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:21.499521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:21.564057  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:21.564088  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:21.564102  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:21.588591  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:21.588627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.133263  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:24.145210  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:24.145292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:24.172487  299667 cri.go:89] found id: ""
	I1205 07:48:24.172509  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.172517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:24.172523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:24.172582  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:24.197589  299667 cri.go:89] found id: ""
	I1205 07:48:24.197612  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.197634  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:24.197641  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:24.197727  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:24.232698  299667 cri.go:89] found id: ""
	I1205 07:48:24.232773  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.232803  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:24.232821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:24.232927  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:24.261831  299667 cri.go:89] found id: ""
	I1205 07:48:24.261854  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.261863  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:24.261870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:24.261932  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:24.290390  299667 cri.go:89] found id: ""
	I1205 07:48:24.290412  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.290420  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:24.290426  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:24.290486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:24.314257  299667 cri.go:89] found id: ""
	I1205 07:48:24.314327  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.314360  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:24.314383  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:24.314475  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:24.338446  299667 cri.go:89] found id: ""
	I1205 07:48:24.338469  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.338477  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:24.338484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:24.338542  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:24.366265  299667 cri.go:89] found id: ""
	I1205 07:48:24.366302  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.366314  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:24.366323  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:24.366335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:24.398722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:24.398759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.430842  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:24.430872  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:24.486913  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:24.486947  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:24.500309  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:24.500333  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:24.571107  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:48:25.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:28.102336  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:27.072799  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:27.082983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:27.083049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:27.106973  299667 cri.go:89] found id: ""
	I1205 07:48:27.106997  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.107005  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:27.107012  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:27.107072  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:27.131580  299667 cri.go:89] found id: ""
	I1205 07:48:27.131604  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.131613  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:27.131619  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:27.131679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:27.156330  299667 cri.go:89] found id: ""
	I1205 07:48:27.156356  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.156364  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:27.156371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:27.156434  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:27.180350  299667 cri.go:89] found id: ""
	I1205 07:48:27.180375  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.180384  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:27.180391  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:27.180449  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:27.204756  299667 cri.go:89] found id: ""
	I1205 07:48:27.204779  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.204787  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:27.204800  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:27.204858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:27.232181  299667 cri.go:89] found id: ""
	I1205 07:48:27.232207  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.232216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:27.232223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:27.232299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:27.258059  299667 cri.go:89] found id: ""
	I1205 07:48:27.258086  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.258095  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:27.258102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:27.258165  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:27.281695  299667 cri.go:89] found id: ""
	I1205 07:48:27.281717  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.281725  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:27.281734  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:27.281746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:27.294855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:27.294880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:27.362846  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:27.362868  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:27.362880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:27.389761  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:27.389791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:27.422138  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:27.422165  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:29.980506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:29.990724  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:29.990791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:30.035211  299667 cri.go:89] found id: ""
	I1205 07:48:30.035238  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.035248  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:30.035256  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:30.035326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:30.063908  299667 cri.go:89] found id: ""
	I1205 07:48:30.063944  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.063953  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:30.063960  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:30.064034  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1205 07:48:30.103232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:32.602298  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:34.602332  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:30.095785  299667 cri.go:89] found id: ""
	I1205 07:48:30.095860  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.095883  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:30.095908  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:30.096002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:30.123133  299667 cri.go:89] found id: ""
	I1205 07:48:30.123156  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.123166  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:30.123172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:30.123235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:30.149862  299667 cri.go:89] found id: ""
	I1205 07:48:30.149885  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.149894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:30.149901  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:30.150013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:30.175817  299667 cri.go:89] found id: ""
	I1205 07:48:30.175883  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.175903  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:30.175920  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:30.176005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:30.201607  299667 cri.go:89] found id: ""
	I1205 07:48:30.201631  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.201640  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:30.201646  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:30.201711  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:30.227899  299667 cri.go:89] found id: ""
	I1205 07:48:30.227922  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.227931  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:30.227940  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:30.227952  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:30.241708  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:30.241742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:30.309566  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:30.309584  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:30.309597  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:30.334740  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:30.334771  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:30.378494  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:30.378524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:32.939968  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:32.950759  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:32.950832  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:32.978406  299667 cri.go:89] found id: ""
	I1205 07:48:32.978430  299667 logs.go:282] 0 containers: []
	W1205 07:48:32.978438  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:32.978454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:32.978513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:33.008532  299667 cri.go:89] found id: ""
	I1205 07:48:33.008559  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.008568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:33.008574  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:33.008650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:33.033972  299667 cri.go:89] found id: ""
	I1205 07:48:33.033997  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.034005  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:33.034013  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:33.034081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:33.059992  299667 cri.go:89] found id: ""
	I1205 07:48:33.060014  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.060023  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:33.060029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:33.060094  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:33.090354  299667 cri.go:89] found id: ""
	I1205 07:48:33.090379  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.090387  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:33.090395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:33.090454  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:33.114706  299667 cri.go:89] found id: ""
	I1205 07:48:33.114735  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.114744  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:33.114751  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:33.114809  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:33.140456  299667 cri.go:89] found id: ""
	I1205 07:48:33.140481  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.140490  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:33.140496  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:33.140557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:33.169438  299667 cri.go:89] found id: ""
	I1205 07:48:33.169461  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.169469  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:33.169478  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:33.169490  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:33.195155  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:33.195189  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:33.221590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:33.221617  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:33.277078  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:33.277110  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:33.290419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:33.290445  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:33.357621  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:48:36.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:38.602933  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:35.857840  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:35.869455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:35.869525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:35.904563  299667 cri.go:89] found id: ""
	I1205 07:48:35.904585  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.904594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:35.904601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:35.904664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:35.932592  299667 cri.go:89] found id: ""
	I1205 07:48:35.932613  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.932622  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:35.932628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:35.932690  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:35.961011  299667 cri.go:89] found id: ""
	I1205 07:48:35.961033  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.961048  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:35.961055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:35.961121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:35.988109  299667 cri.go:89] found id: ""
	I1205 07:48:35.988131  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.988139  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:35.988146  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:35.988212  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:36.021866  299667 cri.go:89] found id: ""
	I1205 07:48:36.021894  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.021903  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:36.021910  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:36.021980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:36.053675  299667 cri.go:89] found id: ""
	I1205 07:48:36.053697  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.053706  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:36.053713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:36.053773  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:36.088227  299667 cri.go:89] found id: ""
	I1205 07:48:36.088252  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.088261  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:36.088268  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:36.088330  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:36.114723  299667 cri.go:89] found id: ""
	I1205 07:48:36.114753  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.114762  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:36.114772  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:36.114792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:36.130077  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:36.130105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:36.199710  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:36.199733  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:36.199746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:36.224920  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:36.224953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:36.260346  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:36.260373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:38.818746  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:38.829029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:38.829103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:38.861723  299667 cri.go:89] found id: ""
	I1205 07:48:38.861746  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.861755  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:38.861761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:38.861827  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:38.889749  299667 cri.go:89] found id: ""
	I1205 07:48:38.889772  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.889781  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:38.889787  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:38.889849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:38.925308  299667 cri.go:89] found id: ""
	I1205 07:48:38.925337  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.925346  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:38.925352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:38.925412  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:38.955710  299667 cri.go:89] found id: ""
	I1205 07:48:38.955732  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.955740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:38.955746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:38.955803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:38.980907  299667 cri.go:89] found id: ""
	I1205 07:48:38.980934  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.980943  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:38.980951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:38.981013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:39.011368  299667 cri.go:89] found id: ""
	I1205 07:48:39.011398  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.011409  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:39.011416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:39.011489  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:39.037693  299667 cri.go:89] found id: ""
	I1205 07:48:39.037719  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.037727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:39.037734  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:39.037806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:39.063915  299667 cri.go:89] found id: ""
	I1205 07:48:39.063940  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.063949  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:39.063957  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:39.063969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:39.120923  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:39.120960  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:39.134276  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:39.134302  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:39.194044  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:39.194064  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:39.194076  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:39.218536  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:39.218569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:41.102495  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:43.102732  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:41.747231  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:41.758180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:41.758258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:41.785400  299667 cri.go:89] found id: ""
	I1205 07:48:41.785426  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.785435  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:41.785442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:41.785509  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:41.817641  299667 cri.go:89] found id: ""
	I1205 07:48:41.817667  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.817676  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:41.817683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:41.817747  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:41.842820  299667 cri.go:89] found id: ""
	I1205 07:48:41.842846  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.842855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:41.842869  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:41.842933  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:41.880166  299667 cri.go:89] found id: ""
	I1205 07:48:41.880194  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.880208  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:41.880214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:41.880291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:41.911193  299667 cri.go:89] found id: ""
	I1205 07:48:41.911258  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.911273  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:41.911281  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:41.911337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:41.935720  299667 cri.go:89] found id: ""
	I1205 07:48:41.935745  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.935754  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:41.935761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:41.935823  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:41.962907  299667 cri.go:89] found id: ""
	I1205 07:48:41.962976  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.962992  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:41.962998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:41.963065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:41.991087  299667 cri.go:89] found id: ""
	I1205 07:48:41.991113  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.991121  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:41.991130  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:41.991140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:42.070025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:42.070073  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:42.086499  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:42.086528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:42.164053  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:42.164130  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:42.164162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:42.192298  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:42.192342  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:44.734604  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:44.745356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:44.745423  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:44.770206  299667 cri.go:89] found id: ""
	I1205 07:48:44.770230  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.770239  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:44.770247  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:44.770305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:44.796086  299667 cri.go:89] found id: ""
	I1205 07:48:44.796109  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.796118  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:44.796124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:44.796182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:44.822053  299667 cri.go:89] found id: ""
	I1205 07:48:44.822125  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.822148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:44.822167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:44.822258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:44.855227  299667 cri.go:89] found id: ""
	I1205 07:48:44.855298  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.855320  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:44.855339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:44.855422  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:44.884787  299667 cri.go:89] found id: ""
	I1205 07:48:44.884859  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.885835  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:44.885875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:44.885967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:44.922015  299667 cri.go:89] found id: ""
	I1205 07:48:44.922040  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.922048  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:44.922055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:44.922120  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:44.946942  299667 cri.go:89] found id: ""
	I1205 07:48:44.946979  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.946988  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:44.946995  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:44.947056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:44.972229  299667 cri.go:89] found id: ""
	I1205 07:48:44.972253  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.972262  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:44.972270  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:44.972280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:44.997401  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:44.997434  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:45.054576  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:45.054602  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:45.102947  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:47.602661  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:45.133742  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:45.133782  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:45.155399  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:45.155496  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:45.257582  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:47.759254  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:47.770034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:47.770107  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:47.799850  299667 cri.go:89] found id: ""
	I1205 07:48:47.799873  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.799882  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:47.799889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:47.799947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:47.824989  299667 cri.go:89] found id: ""
	I1205 07:48:47.825014  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.825022  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:47.825028  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:47.825089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:47.857967  299667 cri.go:89] found id: ""
	I1205 07:48:47.857993  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.858002  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:47.858008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:47.858065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:47.890800  299667 cri.go:89] found id: ""
	I1205 07:48:47.890833  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.890842  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:47.890851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:47.890911  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:47.921850  299667 cri.go:89] found id: ""
	I1205 07:48:47.921874  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.921883  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:47.921890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:47.921950  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:47.946404  299667 cri.go:89] found id: ""
	I1205 07:48:47.946426  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.946435  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:47.946442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:47.946501  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:47.972095  299667 cri.go:89] found id: ""
	I1205 07:48:47.972117  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.972125  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:47.972131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:47.972189  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:47.996555  299667 cri.go:89] found id: ""
	I1205 07:48:47.996577  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.996585  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:47.996594  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:47.996605  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:48.054087  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:48.054122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:48.069006  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:48.069038  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:48.132946  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:48.132968  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:48.132981  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:48.158949  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:48.158986  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:50.102346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:52.103160  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:54.602949  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:50.687838  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:50.698642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:50.698712  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:50.725092  299667 cri.go:89] found id: ""
	I1205 07:48:50.725113  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.725121  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:50.725128  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:50.725208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:50.750131  299667 cri.go:89] found id: ""
	I1205 07:48:50.750153  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.750161  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:50.750167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:50.750233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:50.774733  299667 cri.go:89] found id: ""
	I1205 07:48:50.774755  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.774765  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:50.774773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:50.774858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:50.803492  299667 cri.go:89] found id: ""
	I1205 07:48:50.803514  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.803524  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:50.803531  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:50.803596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:50.828915  299667 cri.go:89] found id: ""
	I1205 07:48:50.828938  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.828947  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:50.828953  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:50.829022  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:50.862065  299667 cri.go:89] found id: ""
	I1205 07:48:50.862090  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.862098  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:50.862105  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:50.862168  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:50.888327  299667 cri.go:89] found id: ""
	I1205 07:48:50.888356  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.888365  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:50.888371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:50.888432  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:50.917551  299667 cri.go:89] found id: ""
	I1205 07:48:50.917583  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.917592  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:50.917601  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:50.917613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:50.976691  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:50.976725  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:50.990259  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:50.990285  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:51.057592  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:51.057614  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:51.057628  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:51.088874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:51.088916  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.619589  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:53.630457  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:53.630521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:53.662396  299667 cri.go:89] found id: ""
	I1205 07:48:53.662420  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.662429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:53.662435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:53.662493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:53.687365  299667 cri.go:89] found id: ""
	I1205 07:48:53.687393  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.687402  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:53.687408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:53.687469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:53.711757  299667 cri.go:89] found id: ""
	I1205 07:48:53.711782  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.711791  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:53.711798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:53.711893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:53.735695  299667 cri.go:89] found id: ""
	I1205 07:48:53.735721  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.735730  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:53.735736  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:53.735793  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:53.763008  299667 cri.go:89] found id: ""
	I1205 07:48:53.763032  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.763041  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:53.763047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:53.763104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:53.791424  299667 cri.go:89] found id: ""
	I1205 07:48:53.791498  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.791520  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:53.791537  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:53.791617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:53.815855  299667 cri.go:89] found id: ""
	I1205 07:48:53.815876  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.815884  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:53.815890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:53.815946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:53.839524  299667 cri.go:89] found id: ""
	I1205 07:48:53.839548  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.839557  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
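Each scan pass above queries the k8s.io containerd root once per expected component; an empty ID list from --quiet is what produces the paired "0 containers" and "No container was found" lines. One iteration in standalone form (the component name is just an example):

    # -a includes exited containers, --quiet prints only IDs,
    # --name filters by a regex on the container name
    sudo crictl ps -a --quiet --name=kube-apiserver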
	I1205 07:48:53.839565  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:53.839577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.884515  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:53.884591  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
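The kubelet and containerd logs are gathered the same way, as a bounded tail of each systemd unit's journal. Equivalent standalone commands:

    sudo journalctl -u kubelet -n 400      # last 400 kubelet journal entries
    sudo journalctl -u containerd -n 400   # same for containerd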
	I1205 07:48:53.947646  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:53.947682  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
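The dmesg invocation keeps only warning-or-worse kernel messages: -H renders human-readable timestamps, -L=never disables color, -P skips the pager, and --level selects the priorities, with tail bounding the output.

    # kernel warnings and above, plain output, capped at 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400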
	I1205 07:48:53.961152  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:53.961211  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:54.031297  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
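The "describe nodes" step runs the kubectl binary that minikube stages on the node for the target Kubernetes version against the node-local kubeconfig, so its failure is another view of the same unreachable apiserver. Standalone form, with the paths exactly as they appear in the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig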
	I1205 07:48:54.031321  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:54.031335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:48:57.102570  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:59.102902  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
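The two warnings above come from a second test process (pid 297527) polling the Ready condition of node no-preload-241270 against that profile's apiserver at 192.168.76.2:8443. Its retry loop corresponds roughly to a check like the following sketch (field path per the core/v1 Node API; not the harness's literal code):

    # prints "True" once the node reports Ready; the test retries while the
    # apiserver connection itself is refused
    kubectl get node no-preload-241270 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'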
	I1205 07:48:56.557021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:56.567576  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:56.567694  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:56.596257  299667 cri.go:89] found id: ""
	I1205 07:48:56.596291  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.596300  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:56.596306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:56.596381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:56.627549  299667 cri.go:89] found id: ""
	I1205 07:48:56.627575  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.627583  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:56.627590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:56.627649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:56.661291  299667 cri.go:89] found id: ""
	I1205 07:48:56.661313  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.661321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:56.661332  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:56.661391  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:56.687435  299667 cri.go:89] found id: ""
	I1205 07:48:56.687462  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.687471  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:56.687477  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:56.687540  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:56.712238  299667 cri.go:89] found id: ""
	I1205 07:48:56.712261  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.712271  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:56.712277  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:56.712340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:56.736638  299667 cri.go:89] found id: ""
	I1205 07:48:56.736663  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.736672  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:56.736690  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:56.736748  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:56.760967  299667 cri.go:89] found id: ""
	I1205 07:48:56.761001  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.761010  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:56.761016  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:56.761075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:56.784912  299667 cri.go:89] found id: ""
	I1205 07:48:56.784939  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.784947  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:56.784958  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:56.784969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.808701  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:56.808734  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:56.835856  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:56.835884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:56.896082  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:56.896154  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:56.914235  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:56.914310  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:56.981742  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:59.483411  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
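Each diagnostic cycle starts by checking for a live apiserver process before re-scanning containers. In the pgrep call, -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n returns only the newest match:

    # exits non-zero when no kube-apiserver process matches the pattern
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'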
	I1205 07:48:59.494080  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:59.494149  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:59.521983  299667 cri.go:89] found id: ""
	I1205 07:48:59.522007  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.522015  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:59.522023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:59.522081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:59.547605  299667 cri.go:89] found id: ""
	I1205 07:48:59.547637  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.547646  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:59.547652  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:59.547718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:59.572816  299667 cri.go:89] found id: ""
	I1205 07:48:59.572839  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.572847  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:59.572854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:59.572909  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:59.598049  299667 cri.go:89] found id: ""
	I1205 07:48:59.598070  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.598078  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:59.598085  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:59.598145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:59.624907  299667 cri.go:89] found id: ""
	I1205 07:48:59.624928  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.624937  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:59.624943  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:59.625001  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:59.651926  299667 cri.go:89] found id: ""
	I1205 07:48:59.651947  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.651955  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:59.651962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:59.652019  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:59.680003  299667 cri.go:89] found id: ""
	I1205 07:48:59.680080  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.680103  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:59.680120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:59.680228  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:59.705437  299667 cri.go:89] found id: ""
	I1205 07:48:59.705465  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.705474  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:59.705483  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:59.705493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:59.763111  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:59.763142  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:59.777300  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:59.777368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:59.842575  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:59.842643  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:59.842663  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:59.869833  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:59.869908  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:01.602955  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:04.102698  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:02.402084  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:02.412782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:02.412851  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:02.438256  299667 cri.go:89] found id: ""
	I1205 07:49:02.438279  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.438287  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:02.438294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:02.438352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:02.465899  299667 cri.go:89] found id: ""
	I1205 07:49:02.465926  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.465935  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:02.465942  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:02.466005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:02.490481  299667 cri.go:89] found id: ""
	I1205 07:49:02.490503  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.490513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:02.490519  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:02.490586  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:02.516169  299667 cri.go:89] found id: ""
	I1205 07:49:02.516196  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.516205  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:02.516211  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:02.516271  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:02.541403  299667 cri.go:89] found id: ""
	I1205 07:49:02.541429  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.541439  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:02.541445  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:02.541507  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:02.566995  299667 cri.go:89] found id: ""
	I1205 07:49:02.567017  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.567025  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:02.567032  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:02.567099  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:02.597621  299667 cri.go:89] found id: ""
	I1205 07:49:02.597644  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.597652  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:02.597657  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:02.597716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:02.628924  299667 cri.go:89] found id: ""
	I1205 07:49:02.628951  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.628960  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:02.628969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:02.628980  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:02.693315  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:02.693348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:02.707066  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:02.707162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:02.771707  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:02.771729  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:02.771742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:02.797113  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:02.797145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:06.603033  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:09.102351  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:05.326530  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:05.336990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:05.337057  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:05.360427  299667 cri.go:89] found id: ""
	I1205 07:49:05.360451  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.360460  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:05.360466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:05.360525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:05.384196  299667 cri.go:89] found id: ""
	I1205 07:49:05.384222  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.384230  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:05.384237  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:05.384299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:05.410321  299667 cri.go:89] found id: ""
	I1205 07:49:05.410344  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.410352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:05.410358  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:05.410417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:05.433726  299667 cri.go:89] found id: ""
	I1205 07:49:05.433793  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.433815  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:05.433833  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:05.433921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:05.458853  299667 cri.go:89] found id: ""
	I1205 07:49:05.458924  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.458940  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:05.458947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:05.459008  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:05.482445  299667 cri.go:89] found id: ""
	I1205 07:49:05.482514  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.482529  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:05.482538  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:05.482610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:05.507192  299667 cri.go:89] found id: ""
	I1205 07:49:05.507260  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.507282  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:05.507300  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:05.507393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:05.532405  299667 cri.go:89] found id: ""
	I1205 07:49:05.532439  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.532448  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:05.532459  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:05.532470  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:05.587713  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:05.587744  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:05.600994  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:05.601062  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:05.676675  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:05.676745  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:05.676770  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:05.700917  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:05.700948  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.230743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:08.241254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:08.241324  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:08.265687  299667 cri.go:89] found id: ""
	I1205 07:49:08.265765  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.265781  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:08.265789  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:08.265873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:08.291182  299667 cri.go:89] found id: ""
	I1205 07:49:08.291212  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.291222  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:08.291230  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:08.291288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:08.316404  299667 cri.go:89] found id: ""
	I1205 07:49:08.316431  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.316439  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:08.316446  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:08.316503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:08.342004  299667 cri.go:89] found id: ""
	I1205 07:49:08.342030  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.342038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:08.342044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:08.342103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:08.370679  299667 cri.go:89] found id: ""
	I1205 07:49:08.370700  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.370708  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:08.370715  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:08.370791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:08.398788  299667 cri.go:89] found id: ""
	I1205 07:49:08.398848  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.398880  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:08.398896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:08.398967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:08.427499  299667 cri.go:89] found id: ""
	I1205 07:49:08.427532  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.427552  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:08.427560  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:08.427627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:08.455982  299667 cri.go:89] found id: ""
	I1205 07:49:08.456008  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.456016  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:08.456025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:08.456037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:08.469660  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:08.469687  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:08.534660  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:08.534684  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:08.534697  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:08.560195  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:08.560228  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.590035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:08.590061  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:49:11.102705  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:13.103312  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:11.150392  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:11.161108  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:11.161194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:11.185243  299667 cri.go:89] found id: ""
	I1205 07:49:11.185264  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.185273  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:11.185280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:11.185338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:11.208758  299667 cri.go:89] found id: ""
	I1205 07:49:11.208797  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.208806  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:11.208815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:11.208884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:11.235054  299667 cri.go:89] found id: ""
	I1205 07:49:11.235077  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.235086  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:11.235092  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:11.235157  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:11.259045  299667 cri.go:89] found id: ""
	I1205 07:49:11.259068  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.259076  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:11.259082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:11.259143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:11.288257  299667 cri.go:89] found id: ""
	I1205 07:49:11.288282  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.288291  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:11.288298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:11.288354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:11.312884  299667 cri.go:89] found id: ""
	I1205 07:49:11.312906  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.312914  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:11.312922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:11.312978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:11.341317  299667 cri.go:89] found id: ""
	I1205 07:49:11.341340  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.341348  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:11.341354  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:11.341411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:11.365207  299667 cri.go:89] found id: ""
	I1205 07:49:11.365234  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.365243  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:11.365260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:11.365271  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:11.423587  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:11.423619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:11.437723  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:11.437796  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:11.504822  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:11.504896  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:11.504935  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:11.529753  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:11.529791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:14.059148  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:14.069586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:14.069676  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:14.103804  299667 cri.go:89] found id: ""
	I1205 07:49:14.103828  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.103837  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:14.103843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:14.103901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:14.135010  299667 cri.go:89] found id: ""
	I1205 07:49:14.135031  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.135040  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:14.135045  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:14.135104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:14.170829  299667 cri.go:89] found id: ""
	I1205 07:49:14.170851  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.170859  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:14.170865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:14.170926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:14.199693  299667 cri.go:89] found id: ""
	I1205 07:49:14.199715  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.199724  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:14.199730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:14.199789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:14.223902  299667 cri.go:89] found id: ""
	I1205 07:49:14.223924  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.223931  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:14.223937  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:14.224003  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:14.247854  299667 cri.go:89] found id: ""
	I1205 07:49:14.247926  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.247950  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:14.247969  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:14.248063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:14.272146  299667 cri.go:89] found id: ""
	I1205 07:49:14.272219  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.272250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:14.272270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:14.272375  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:14.297307  299667 cri.go:89] found id: ""
	I1205 07:49:14.297377  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.297404  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:14.297421  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:14.297436  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:14.352148  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:14.352181  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:14.365391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:14.365420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:14.429045  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:14.429068  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:14.429080  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:14.453460  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:14.453494  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:15.602762  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:17.602959  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:16.984086  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:16.994499  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:16.994567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:17.022900  299667 cri.go:89] found id: ""
	I1205 07:49:17.022923  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.022932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:17.022939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:17.022997  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:17.047244  299667 cri.go:89] found id: ""
	I1205 07:49:17.047318  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.047332  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:17.047339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:17.047415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:17.070683  299667 cri.go:89] found id: ""
	I1205 07:49:17.070716  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.070725  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:17.070732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:17.070811  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:17.104238  299667 cri.go:89] found id: ""
	I1205 07:49:17.104310  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.104332  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:17.104351  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:17.104433  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:17.130787  299667 cri.go:89] found id: ""
	I1205 07:49:17.130867  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.130890  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:17.130907  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:17.131014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:17.159177  299667 cri.go:89] found id: ""
	I1205 07:49:17.159212  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.159221  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:17.159228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:17.159293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:17.187127  299667 cri.go:89] found id: ""
	I1205 07:49:17.187148  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.187157  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:17.187168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:17.187225  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:17.214608  299667 cri.go:89] found id: ""
	I1205 07:49:17.214633  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.214641  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:17.214650  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:17.214690  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:17.227937  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:17.227964  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:17.290517  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:17.290581  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:17.290600  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:17.315039  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:17.315074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:17.343285  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:17.343348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:19.899406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:19.910597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:19.910679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:19.935640  299667 cri.go:89] found id: ""
	I1205 07:49:19.935664  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.935673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:19.935679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:19.935736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:19.959309  299667 cri.go:89] found id: ""
	I1205 07:49:19.959336  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.959345  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:19.959352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:19.959418  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:19.982862  299667 cri.go:89] found id: ""
	I1205 07:49:19.982884  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.982893  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:19.982899  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:19.982957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:20.016784  299667 cri.go:89] found id: ""
	I1205 07:49:20.016810  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.016819  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:20.016826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:20.016893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:20.044555  299667 cri.go:89] found id: ""
	I1205 07:49:20.044580  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.044590  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:20.044597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:20.044657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:20.080570  299667 cri.go:89] found id: ""
	I1205 07:49:20.080595  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.080603  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:20.080610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:20.080689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1205 07:49:20.102423  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:22.102493  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:24.602330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:20.112802  299667 cri.go:89] found id: ""
	I1205 07:49:20.112829  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.112838  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:20.112852  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:20.112912  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:20.145614  299667 cri.go:89] found id: ""
	I1205 07:49:20.145642  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.145650  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:20.145659  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:20.145670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:20.208200  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:20.208233  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:20.222391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:20.222422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:20.285471  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:20.285500  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:20.285513  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:20.311384  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:20.311415  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:22.840933  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:22.854843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:22.854939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:22.881572  299667 cri.go:89] found id: ""
	I1205 07:49:22.881598  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.881608  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:22.881614  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:22.881677  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:22.917647  299667 cri.go:89] found id: ""
	I1205 07:49:22.917677  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.917686  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:22.917692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:22.917750  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:22.943325  299667 cri.go:89] found id: ""
	I1205 07:49:22.943346  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.943355  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:22.943362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:22.943426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:22.967894  299667 cri.go:89] found id: ""
	I1205 07:49:22.967955  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.967979  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:22.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:22.968076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:22.994911  299667 cri.go:89] found id: ""
	I1205 07:49:22.994976  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.994991  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:22.994998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:22.995056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:23.022399  299667 cri.go:89] found id: ""
	I1205 07:49:23.022464  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.022486  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:23.022506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:23.022581  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:23.048262  299667 cri.go:89] found id: ""
	I1205 07:49:23.048283  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.048291  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:23.048297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:23.048355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:23.072655  299667 cri.go:89] found id: ""
	I1205 07:49:23.072684  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.072694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:23.072702  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:23.072720  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:23.132711  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:23.132742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:23.146553  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:23.146576  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:23.218207  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:23.218230  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:23.218243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:23.242426  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:23.242462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:27.102316  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:29.602939  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:25.772926  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:25.783467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:25.783546  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:25.811044  299667 cri.go:89] found id: ""
	I1205 07:49:25.811066  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.811075  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:25.811081  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:25.811139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:25.835534  299667 cri.go:89] found id: ""
	I1205 07:49:25.835558  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.835568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:25.835575  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:25.835637  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:25.866938  299667 cri.go:89] found id: ""
	I1205 07:49:25.866966  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.866974  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:25.866981  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:25.867043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:25.897273  299667 cri.go:89] found id: ""
	I1205 07:49:25.897302  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.897313  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:25.897320  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:25.897380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:25.923461  299667 cri.go:89] found id: ""
	I1205 07:49:25.923489  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.923497  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:25.923504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:25.923590  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:25.946791  299667 cri.go:89] found id: ""
	I1205 07:49:25.946813  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.946822  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:25.946828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:25.946885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:25.971479  299667 cri.go:89] found id: ""
	I1205 07:49:25.971507  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.971515  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:25.971521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:25.971580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:25.994965  299667 cri.go:89] found id: ""
	I1205 07:49:25.994986  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.994994  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:25.995003  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:25.995014  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:26.058667  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:26.058701  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:26.073089  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:26.073119  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:26.150334  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:26.150355  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:26.150367  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:26.182077  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:26.182109  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:28.710700  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:28.722142  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:28.722208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:28.749003  299667 cri.go:89] found id: ""
	I1205 07:49:28.749029  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.749037  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:28.749044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:28.749101  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:28.774112  299667 cri.go:89] found id: ""
	I1205 07:49:28.774141  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.774152  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:28.774158  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:28.774215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:28.797966  299667 cri.go:89] found id: ""
	I1205 07:49:28.797987  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.797996  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:28.798002  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:28.798058  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:28.825668  299667 cri.go:89] found id: ""
	I1205 07:49:28.825694  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.825703  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:28.825709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:28.825788  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:28.856952  299667 cri.go:89] found id: ""
	I1205 07:49:28.856986  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.857001  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:28.857008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:28.857091  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:28.882695  299667 cri.go:89] found id: ""
	I1205 07:49:28.882730  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.882746  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:28.882753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:28.882822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:28.909550  299667 cri.go:89] found id: ""
	I1205 07:49:28.909584  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.909594  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:28.909601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:28.909671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:28.942251  299667 cri.go:89] found id: ""
	I1205 07:49:28.942319  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.942340  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:28.942362  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:28.942387  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:29.005506  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:29.005539  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:29.005554  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:29.030880  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:29.030910  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:29.058353  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:29.058381  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:29.121228  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:29.121304  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:32.102320  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:34.103275  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:31.636506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:31.647234  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:31.647305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:31.672508  299667 cri.go:89] found id: ""
	I1205 07:49:31.672530  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.672539  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:31.672545  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:31.672603  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:31.696860  299667 cri.go:89] found id: ""
	I1205 07:49:31.696885  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.696894  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:31.696900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:31.696970  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:31.722649  299667 cri.go:89] found id: ""
	I1205 07:49:31.722676  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.722685  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:31.722692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:31.722770  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:31.748068  299667 cri.go:89] found id: ""
	I1205 07:49:31.748093  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.748101  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:31.748109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:31.748169  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:31.773290  299667 cri.go:89] found id: ""
	I1205 07:49:31.773315  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.773324  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:31.773330  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:31.773393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:31.804425  299667 cri.go:89] found id: ""
	I1205 07:49:31.804445  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.804454  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:31.804461  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:31.804521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:31.829116  299667 cri.go:89] found id: ""
	I1205 07:49:31.829137  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.829146  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:31.829152  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:31.829241  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:31.867330  299667 cri.go:89] found id: ""
	I1205 07:49:31.867406  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.867418  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:31.867427  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:31.867438  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:31.931647  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:31.931680  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.945211  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:31.945236  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:32.004694  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:32.004719  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:32.004738  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:32.031538  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:32.031572  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:34.562576  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:34.573366  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:34.573477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:34.599238  299667 cri.go:89] found id: ""
	I1205 07:49:34.599262  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.599272  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:34.599279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:34.599342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:34.624561  299667 cri.go:89] found id: ""
	I1205 07:49:34.624589  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.624598  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:34.624604  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:34.624666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:34.649603  299667 cri.go:89] found id: ""
	I1205 07:49:34.649624  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.649637  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:34.649644  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:34.649707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:34.674019  299667 cri.go:89] found id: ""
	I1205 07:49:34.674043  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.674052  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:34.674058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:34.674121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:34.700890  299667 cri.go:89] found id: ""
	I1205 07:49:34.700912  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.700921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:34.700928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:34.700988  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:34.727454  299667 cri.go:89] found id: ""
	I1205 07:49:34.727482  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.727491  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:34.727498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:34.727558  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:34.753086  299667 cri.go:89] found id: ""
	I1205 07:49:34.753107  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.753115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:34.753120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:34.753208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:34.779077  299667 cri.go:89] found id: ""
	I1205 07:49:34.779100  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.779109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:34.779118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:34.779129  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:34.839330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:34.839368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:34.857129  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:34.857175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:34.932420  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:34.932440  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:34.932452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:34.957616  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:34.957649  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
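[Editor's note] The cycle above is minikube's control-plane probe: for each expected component it runs `sudo crictl ps -a --quiet --name=<component>`, and an empty ID list is logged as `No container was found matching "<component>"`. A minimal Go sketch of that probe pattern follows; this is not minikube's actual code, and it assumes crictl is installed and sudo is available on the node.

// probe_containers.go - a minimal sketch (hypothetical file name) of the
// per-component probe visible in the log above: run
// `crictl ps -a --quiet --name=<component>` and treat empty output as
// "no container found". Assumes sudo and crictl are available.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		// crictl --quiet prints one container ID per line; no lines means
		// the component has no container, matching the W-lines in the log.
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}

Every probe in this section returns an empty list, which is why the runner repeatedly falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs.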
	W1205 07:49:36.602677  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:39.102319  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
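[Editor's note] The interleaved W-lines from process 297527 belong to a second profile (no-preload-241270) polling the node's Ready condition roughly every 2.5s and retrying on "connection refused". Below is a minimal reachability sketch of that retry loop; it is an assumption-laden illustration, not minikube's node_ready.go: it sends anonymous requests with TLS verification skipped, whereas the real check authenticates and parses the returned Node object.

// node_ready_poll.go - a minimal sketch (hypothetical file name) of retrying
// a node GET until the apiserver accepts connections. Assumptions: no client
// credentials, TLS verification disabled, fixed 2.5s backoff as seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "connect: connection refused" warnings in the log:
			// nothing is listening on 8443 yet, so wait and retry.
			fmt.Printf("will retry: %v\n", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver reachable, status:", resp.Status)
		return
	}
	fmt.Println("gave up waiting for apiserver")
}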
	I1205 07:49:37.486529  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:37.496909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:37.496977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:37.521254  299667 cri.go:89] found id: ""
	I1205 07:49:37.521315  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.521349  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:37.521372  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:37.521462  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:37.544759  299667 cri.go:89] found id: ""
	I1205 07:49:37.544782  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.544791  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:37.544798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:37.544854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:37.569519  299667 cri.go:89] found id: ""
	I1205 07:49:37.569549  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.569558  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:37.569564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:37.569624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:37.593917  299667 cri.go:89] found id: ""
	I1205 07:49:37.593938  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.593947  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:37.593954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:37.594014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:37.619915  299667 cri.go:89] found id: ""
	I1205 07:49:37.619940  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.619949  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:37.619955  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:37.620016  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:37.647160  299667 cri.go:89] found id: ""
	I1205 07:49:37.647186  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.647195  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:37.647202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:37.647261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:37.672076  299667 cri.go:89] found id: ""
	I1205 07:49:37.672097  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.672105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:37.672111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:37.672170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:37.697550  299667 cri.go:89] found id: ""
	I1205 07:49:37.697573  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.697581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:37.697590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:37.697601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:37.754073  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:37.754105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:37.769043  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:37.769071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:37.831338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
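[Editor's note] Each "describe nodes" failure block in this section has the same shape: kubectl exits with status 1, stdout is empty, and the connection-refused detail lands on stderr, which minikube echoes once inline and once between the ** stderr ** markers. A minimal sketch of capturing the two streams separately the same way is below; the file name is hypothetical, and the kubectl/kubeconfig paths are taken verbatim from the log.

// describe_nodes.go - a minimal sketch (hypothetical file name) of running
// the same describe-nodes command and keeping stdout and stderr apart, as
// the log output above does.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// With no apiserver listening on localhost:8443, kubectl exits 1 and
		// all detail goes to stderr: hence the empty stdout block followed by
		// the "** stderr **" section in the log.
		fmt.Printf("describe nodes failed: %v\nstdout:\n%s\nstderr:\n%s",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}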
	I1205 07:49:37.831359  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:37.831371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:37.857528  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:37.857564  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:41.602800  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:44.102845  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:40.404513  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:40.415071  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:40.415143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:40.439261  299667 cri.go:89] found id: ""
	I1205 07:49:40.439283  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.439291  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:40.439298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:40.439355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:40.464063  299667 cri.go:89] found id: ""
	I1205 07:49:40.464084  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.464092  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:40.464098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:40.464158  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:40.490322  299667 cri.go:89] found id: ""
	I1205 07:49:40.490344  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.490352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:40.490359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:40.490419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:40.517055  299667 cri.go:89] found id: ""
	I1205 07:49:40.517078  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.517087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:40.517093  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:40.517151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:40.545250  299667 cri.go:89] found id: ""
	I1205 07:49:40.545273  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.545282  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:40.545288  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:40.545348  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:40.569118  299667 cri.go:89] found id: ""
	I1205 07:49:40.569142  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.569151  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:40.569188  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:40.569248  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:40.593152  299667 cri.go:89] found id: ""
	I1205 07:49:40.593209  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.593217  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:40.593223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:40.593287  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:40.617285  299667 cri.go:89] found id: ""
	I1205 07:49:40.617308  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.617316  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:40.617325  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:40.617336  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:40.681518  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:40.681540  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:40.681553  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:40.707309  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:40.707347  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.740118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:40.740145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:40.798971  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:40.799001  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.313313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:43.324257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:43.324337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:43.356730  299667 cri.go:89] found id: ""
	I1205 07:49:43.356755  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.356763  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:43.356770  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:43.356828  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:43.386071  299667 cri.go:89] found id: ""
	I1205 07:49:43.386097  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.386106  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:43.386112  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:43.386172  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:43.415579  299667 cri.go:89] found id: ""
	I1205 07:49:43.415606  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.415615  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:43.415621  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:43.415679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:43.441039  299667 cri.go:89] found id: ""
	I1205 07:49:43.441064  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.441075  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:43.441082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:43.441141  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:43.466399  299667 cri.go:89] found id: ""
	I1205 07:49:43.466432  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.466442  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:43.466449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:43.466519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:43.497264  299667 cri.go:89] found id: ""
	I1205 07:49:43.497309  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.497319  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:43.497326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:43.497397  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:43.522221  299667 cri.go:89] found id: ""
	I1205 07:49:43.522247  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.522256  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:43.522262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:43.522325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:43.546887  299667 cri.go:89] found id: ""
	I1205 07:49:43.546953  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.546969  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:43.546980  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:43.546992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:43.613596  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:43.613644  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.628794  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:43.628825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:43.698835  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:43.698854  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:43.698866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:43.725776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:43.725811  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:46.103222  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:48.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:46.256365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:46.267583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:46.267659  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:46.296652  299667 cri.go:89] found id: ""
	I1205 07:49:46.296679  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.296687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:46.296694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:46.296760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:46.323489  299667 cri.go:89] found id: ""
	I1205 07:49:46.323514  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.323522  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:46.323529  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:46.323593  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:46.355225  299667 cri.go:89] found id: ""
	I1205 07:49:46.355249  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.355258  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:46.355265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:46.355340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:46.383644  299667 cri.go:89] found id: ""
	I1205 07:49:46.383678  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.383687  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:46.383694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:46.383768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:46.421484  299667 cri.go:89] found id: ""
	I1205 07:49:46.421518  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.421527  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:46.421533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:46.421602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:46.447032  299667 cri.go:89] found id: ""
	I1205 07:49:46.447057  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.447066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:46.447073  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:46.447136  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:46.472839  299667 cri.go:89] found id: ""
	I1205 07:49:46.472860  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.472867  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:46.472873  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:46.472930  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:46.501395  299667 cri.go:89] found id: ""
	I1205 07:49:46.501422  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.501432  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:46.501441  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:46.501452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:46.558146  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:46.558178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:46.573118  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:46.573146  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:46.637720  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:46.637741  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:46.637754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:46.662623  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:46.662658  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.193341  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:49.204485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:49.204616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:49.235316  299667 cri.go:89] found id: ""
	I1205 07:49:49.235380  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.235403  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:49.235424  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:49.235503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:49.259781  299667 cri.go:89] found id: ""
	I1205 07:49:49.259811  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.259820  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:49.259826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:49.259894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:49.283985  299667 cri.go:89] found id: ""
	I1205 07:49:49.284025  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.284034  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:49.284041  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:49.284123  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:49.312614  299667 cri.go:89] found id: ""
	I1205 07:49:49.312643  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.312652  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:49.312659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:49.312728  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:49.338339  299667 cri.go:89] found id: ""
	I1205 07:49:49.338362  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.338371  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:49.338378  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:49.338444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:49.367532  299667 cri.go:89] found id: ""
	I1205 07:49:49.367557  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.367565  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:49.367572  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:49.367635  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:49.401925  299667 cri.go:89] found id: ""
	I1205 07:49:49.402000  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.402020  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:49.402038  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:49.402122  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:49.428942  299667 cri.go:89] found id: ""
	I1205 07:49:49.428975  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.428993  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:49.429003  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:49.429021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:49.492403  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:49.492426  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:49.492439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:49.517991  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:49.518021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.545729  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:49.545754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:49.601110  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:49.601140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:51.102462  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:53.103333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:52.115102  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:52.128449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:52.128522  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:52.158550  299667 cri.go:89] found id: ""
	I1205 07:49:52.158575  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.158584  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:52.158591  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:52.158654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:52.183729  299667 cri.go:89] found id: ""
	I1205 07:49:52.183750  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.183759  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:52.183765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:52.183829  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:52.209241  299667 cri.go:89] found id: ""
	I1205 07:49:52.209269  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.209279  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:52.209286  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:52.209367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:52.234457  299667 cri.go:89] found id: ""
	I1205 07:49:52.234488  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.234497  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:52.234504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:52.234568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:52.258774  299667 cri.go:89] found id: ""
	I1205 07:49:52.258799  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.258808  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:52.258815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:52.258904  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:52.284285  299667 cri.go:89] found id: ""
	I1205 07:49:52.284319  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.284329  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:52.284336  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:52.284406  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:52.311443  299667 cri.go:89] found id: ""
	I1205 07:49:52.311470  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.311479  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:52.311485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:52.311577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:52.335827  299667 cri.go:89] found id: ""
	I1205 07:49:52.335859  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.335868  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:52.335879  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:52.335890  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:52.395851  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:52.395889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:52.410419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:52.410446  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:52.478966  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:52.478997  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:52.479010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:52.504082  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:52.504114  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.031406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:55.042458  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:55.042534  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:55.066642  299667 cri.go:89] found id: ""
	I1205 07:49:55.066667  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.066677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:55.066684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:55.066746  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:49:55.602712  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:58.102265  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:55.091150  299667 cri.go:89] found id: ""
	I1205 07:49:55.091180  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.091189  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:55.091195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:55.091255  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:55.121930  299667 cri.go:89] found id: ""
	I1205 07:49:55.121951  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.121960  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:55.121965  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:55.122023  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:55.149981  299667 cri.go:89] found id: ""
	I1205 07:49:55.150058  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.150079  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:55.150097  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:55.150184  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:55.173681  299667 cri.go:89] found id: ""
	I1205 07:49:55.173704  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.173712  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:55.173718  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:55.173777  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:55.197308  299667 cri.go:89] found id: ""
	I1205 07:49:55.197332  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.197341  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:55.197347  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:55.197403  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:55.223472  299667 cri.go:89] found id: ""
	I1205 07:49:55.223493  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.223502  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:55.223508  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:55.223572  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:55.252432  299667 cri.go:89] found id: ""
	I1205 07:49:55.252457  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.252466  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:55.252474  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:55.252487  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:55.318488  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:55.318520  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:55.318533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:55.343511  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:55.343587  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.386735  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:55.386818  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:55.452457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:55.452497  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:57.966172  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:57.976919  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:57.976991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:58.003394  299667 cri.go:89] found id: ""
	I1205 07:49:58.003420  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.003429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:58.003436  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:58.003505  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:58.040382  299667 cri.go:89] found id: ""
	I1205 07:49:58.040403  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.040411  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:58.040425  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:58.040486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:58.066131  299667 cri.go:89] found id: ""
	I1205 07:49:58.066161  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.066170  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:58.066177  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:58.066236  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:58.092126  299667 cri.go:89] found id: ""
	I1205 07:49:58.092149  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.092157  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:58.092164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:58.092224  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:58.123111  299667 cri.go:89] found id: ""
	I1205 07:49:58.123138  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.123147  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:58.123154  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:58.123215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:58.155898  299667 cri.go:89] found id: ""
	I1205 07:49:58.155920  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.155929  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:58.155936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:58.156002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:58.181658  299667 cri.go:89] found id: ""
	I1205 07:49:58.181684  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.181694  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:58.181700  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:58.181760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:58.211071  299667 cri.go:89] found id: ""
	I1205 07:49:58.211093  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.211102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:58.211111  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:58.211122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:58.271505  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:58.271551  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:58.287071  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:58.287097  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:58.357627  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:58.357680  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:58.357694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:58.388703  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:58.388747  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:00.103169  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:02.602855  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:04.603343  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:00.928058  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:00.939115  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:00.939186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:00.967955  299667 cri.go:89] found id: ""
	I1205 07:50:00.967979  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.967989  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:00.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:00.968054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:00.994981  299667 cri.go:89] found id: ""
	I1205 07:50:00.995006  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.995014  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:00.995022  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:00.995081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:01.020388  299667 cri.go:89] found id: ""
	I1205 07:50:01.020412  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.020421  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:01.020427  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:01.020487  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:01.045771  299667 cri.go:89] found id: ""
	I1205 07:50:01.045796  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.045816  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:01.045839  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:01.045915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:01.072970  299667 cri.go:89] found id: ""
	I1205 07:50:01.072995  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.073004  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:01.073009  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:01.073069  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:01.110343  299667 cri.go:89] found id: ""
	I1205 07:50:01.110365  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.110374  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:01.110382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:01.110442  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:01.143588  299667 cri.go:89] found id: ""
	I1205 07:50:01.143627  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.143669  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:01.143676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:01.143734  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:01.173718  299667 cri.go:89] found id: ""
	I1205 07:50:01.173744  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.173753  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:01.173762  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:01.173775  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:01.240437  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:01.240461  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:01.240475  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:01.265849  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:01.265884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:01.295649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:01.295676  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:01.352457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:01.352493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:03.872935  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:03.884137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:03.884213  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:03.909107  299667 cri.go:89] found id: ""
	I1205 07:50:03.909129  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.909138  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:03.909144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:03.909231  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:03.935188  299667 cri.go:89] found id: ""
	I1205 07:50:03.935217  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.935229  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:03.935235  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:03.935293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:03.960991  299667 cri.go:89] found id: ""
	I1205 07:50:03.961013  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.961023  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:03.961029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:03.961087  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:03.993563  299667 cri.go:89] found id: ""
	I1205 07:50:03.993586  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.993595  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:03.993602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:03.993658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:04.022615  299667 cri.go:89] found id: ""
	I1205 07:50:04.022640  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.022650  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:04.022656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:04.022744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:04.052044  299667 cri.go:89] found id: ""
	I1205 07:50:04.052067  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.052076  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:04.052083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:04.052155  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:04.077688  299667 cri.go:89] found id: ""
	I1205 07:50:04.077766  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.077790  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:04.077798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:04.077873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:04.108745  299667 cri.go:89] found id: ""
	I1205 07:50:04.108772  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.108781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:04.108790  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:04.108806  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:04.124370  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:04.124398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:04.202708  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:04.202730  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:04.202742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:04.228486  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:04.228522  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:04.257187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:04.257214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:07.102231  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:09.102419  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:06.817489  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:06.828313  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:06.828385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:06.852373  299667 cri.go:89] found id: ""
	I1205 07:50:06.852445  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.852468  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:06.852489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:06.852557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:06.877263  299667 cri.go:89] found id: ""
	I1205 07:50:06.877291  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.877300  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:06.877306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:06.877373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:06.902856  299667 cri.go:89] found id: ""
	I1205 07:50:06.902882  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.902892  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:06.902898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:06.902962  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:06.928569  299667 cri.go:89] found id: ""
	I1205 07:50:06.928595  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.928604  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:06.928611  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:06.928689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:06.953448  299667 cri.go:89] found id: ""
	I1205 07:50:06.953481  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.953491  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:06.953498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:06.953567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:06.978486  299667 cri.go:89] found id: ""
	I1205 07:50:06.978557  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.978579  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:06.978592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:06.978653  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:07.004116  299667 cri.go:89] found id: ""
	I1205 07:50:07.004201  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.004245  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:07.004278  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:07.004369  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:07.030912  299667 cri.go:89] found id: ""
	I1205 07:50:07.030946  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.030956  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:07.030966  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:07.030995  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:07.087669  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:07.087703  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:07.102364  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:07.102424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:07.175733  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:07.175756  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:07.175768  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:07.201087  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:07.201120  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.733660  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:09.744254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:09.744322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:09.768703  299667 cri.go:89] found id: ""
	I1205 07:50:09.768725  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.768733  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:09.768740  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:09.768803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:09.792862  299667 cri.go:89] found id: ""
	I1205 07:50:09.792884  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.792892  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:09.792898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:09.792953  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:09.816998  299667 cri.go:89] found id: ""
	I1205 07:50:09.817020  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.817028  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:09.817042  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:09.817098  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:09.846103  299667 cri.go:89] found id: ""
	I1205 07:50:09.846128  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.846137  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:09.846144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:09.846215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:09.869920  299667 cri.go:89] found id: ""
	I1205 07:50:09.869943  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.869952  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:09.869958  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:09.870017  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:09.894186  299667 cri.go:89] found id: ""
	I1205 07:50:09.894207  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.894216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:09.894222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:09.894279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:09.918290  299667 cri.go:89] found id: ""
	I1205 07:50:09.918323  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.918332  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:09.918338  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:09.918404  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:09.942213  299667 cri.go:89] found id: ""
	I1205 07:50:09.942241  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.942250  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:09.942260  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:09.942300  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.971801  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:09.971827  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:10.027693  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:10.027732  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:10.042067  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:10.042095  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:11.102920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:13.602347  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:10.106137  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:10.106162  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:10.106175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.633673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:12.645469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:12.645547  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:12.676971  299667 cri.go:89] found id: ""
	I1205 07:50:12.676997  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.677007  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:12.677014  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:12.677084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:12.702338  299667 cri.go:89] found id: ""
	I1205 07:50:12.702361  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.702370  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:12.702377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:12.702436  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:12.726932  299667 cri.go:89] found id: ""
	I1205 07:50:12.726958  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.726968  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:12.726974  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:12.727054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:12.752194  299667 cri.go:89] found id: ""
	I1205 07:50:12.752231  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.752240  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:12.752246  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:12.752354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:12.777805  299667 cri.go:89] found id: ""
	I1205 07:50:12.777874  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.777897  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:12.777917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:12.777990  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:12.802215  299667 cri.go:89] found id: ""
	I1205 07:50:12.802240  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.802250  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:12.802257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:12.802334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:12.831796  299667 cri.go:89] found id: ""
	I1205 07:50:12.831821  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.831830  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:12.831836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:12.831899  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:12.856886  299667 cri.go:89] found id: ""
	I1205 07:50:12.856912  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.856921  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:12.856930  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:12.856941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:12.870323  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:12.870352  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:12.933303  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:12.933325  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:12.933339  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.958156  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:12.958191  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:12.986132  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:12.986158  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:15.602727  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:17.602807  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:15.543265  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:15.553756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:15.553824  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:15.579618  299667 cri.go:89] found id: ""
	I1205 07:50:15.579641  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.579650  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:15.579656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:15.579719  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:15.615622  299667 cri.go:89] found id: ""
	I1205 07:50:15.615646  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.615654  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:15.615660  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:15.615718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:15.648566  299667 cri.go:89] found id: ""
	I1205 07:50:15.648595  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.648604  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:15.648610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:15.648669  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:15.678106  299667 cri.go:89] found id: ""
	I1205 07:50:15.678132  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.678141  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:15.678147  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:15.678210  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:15.703125  299667 cri.go:89] found id: ""
	I1205 07:50:15.703148  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.703157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:15.703163  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:15.703229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:15.727847  299667 cri.go:89] found id: ""
	I1205 07:50:15.727873  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.727882  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:15.727889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:15.727948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:15.755105  299667 cri.go:89] found id: ""
	I1205 07:50:15.755129  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.755138  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:15.755144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:15.755203  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:15.780309  299667 cri.go:89] found id: ""
	I1205 07:50:15.780334  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.780343  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:15.780351  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:15.780362  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:15.836755  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:15.836788  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:15.850164  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:15.850241  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:15.913792  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:15.913812  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:15.913828  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:15.938310  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:15.938344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.465299  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:18.475870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:18.475939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:18.501780  299667 cri.go:89] found id: ""
	I1205 07:50:18.501806  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.501821  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:18.501828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:18.501886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:18.526890  299667 cri.go:89] found id: ""
	I1205 07:50:18.526920  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.526929  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:18.526936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:18.526996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:18.552506  299667 cri.go:89] found id: ""
	I1205 07:50:18.552531  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.552540  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:18.552546  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:18.552605  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:18.577492  299667 cri.go:89] found id: ""
	I1205 07:50:18.577517  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.577526  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:18.577533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:18.577591  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:18.609705  299667 cri.go:89] found id: ""
	I1205 07:50:18.609731  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.609740  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:18.609746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:18.609804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:18.637216  299667 cri.go:89] found id: ""
	I1205 07:50:18.637242  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.637251  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:18.637258  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:18.637315  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:18.663025  299667 cri.go:89] found id: ""
	I1205 07:50:18.663051  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.663060  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:18.663067  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:18.663145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:18.689022  299667 cri.go:89] found id: ""
	I1205 07:50:18.689086  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.689109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:18.689131  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:18.689192  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:18.703250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:18.703279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:18.768192  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:18.768211  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:18.768223  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:18.793554  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:18.793585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.828893  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:18.828920  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
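The repeating pattern above is minikube's crash-diagnostics pass: with the apiserver unreachable, it asks crictl for each control-plane component by name, finds no containers, and falls back to journalctl and dmesg. Below is a minimal, self-contained Go sketch of that scan, not minikube's actual cri.go/logs.go code; the component list and the crictl invocation are copied from the log, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names exactly as probed in the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>.
    		// --quiet prints only container IDs, so empty output means no match.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.TrimSpace(string(out))
    		if err != nil || ids == "" {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("found id: %q\n", ids)
    	}
    }

On a node where no control-plane containers exist, this prints the same eight "No container was found matching" lines that recur throughout the log.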
	W1205 07:50:20.102540  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:22.602506  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:24.602962  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:21.385309  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:21.397376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:21.397451  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:21.424618  299667 cri.go:89] found id: ""
	I1205 07:50:21.424642  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.424652  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:21.424659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:21.424717  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:21.451181  299667 cri.go:89] found id: ""
	I1205 07:50:21.451202  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.451211  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:21.451217  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:21.451275  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:21.475206  299667 cri.go:89] found id: ""
	I1205 07:50:21.475228  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.475237  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:21.475243  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:21.475300  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:21.505637  299667 cri.go:89] found id: ""
	I1205 07:50:21.505663  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.505672  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:21.505679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:21.505738  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:21.534466  299667 cri.go:89] found id: ""
	I1205 07:50:21.534541  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.534557  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:21.534579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:21.534644  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:21.560428  299667 cri.go:89] found id: ""
	I1205 07:50:21.560453  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.560462  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:21.560472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:21.560530  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:21.584825  299667 cri.go:89] found id: ""
	I1205 07:50:21.584852  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.584860  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:21.584867  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:21.584934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:21.623066  299667 cri.go:89] found id: ""
	I1205 07:50:21.623093  299667 logs.go:282] 0 containers: []
	W1205 07:50:21.623102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:21.623112  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:21.623127  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:21.687398  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:21.687435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:21.702122  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:21.702149  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:21.767031  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:21.759339   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.759801   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.761524   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.762229   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:21.763794   10603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:21.767050  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:21.767063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:21.791862  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:21.791895  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.321349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:24.331708  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:24.331778  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:24.369231  299667 cri.go:89] found id: ""
	I1205 07:50:24.369255  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.369264  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:24.369270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:24.369345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:24.397058  299667 cri.go:89] found id: ""
	I1205 07:50:24.397078  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.397088  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:24.397094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:24.397152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:24.425233  299667 cri.go:89] found id: ""
	I1205 07:50:24.425256  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.425264  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:24.425271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:24.425325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:24.451011  299667 cri.go:89] found id: ""
	I1205 07:50:24.451032  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.451041  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:24.451047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:24.451103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:24.475249  299667 cri.go:89] found id: ""
	I1205 07:50:24.475278  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.475287  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:24.475294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:24.475352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:24.500860  299667 cri.go:89] found id: ""
	I1205 07:50:24.500885  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.500895  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:24.500911  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:24.500969  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:24.525728  299667 cri.go:89] found id: ""
	I1205 07:50:24.525751  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.525771  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:24.525778  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:24.525839  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:24.549854  299667 cri.go:89] found id: ""
	I1205 07:50:24.549877  299667 logs.go:282] 0 containers: []
	W1205 07:50:24.549885  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:24.549894  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:24.549923  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:24.574340  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:24.574371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:24.609821  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:24.609850  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:24.668879  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:24.668917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:24.683025  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:24.683052  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:24.745503  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:24.737458   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.738079   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.739641   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.740318   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:24.741883   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:50:27.102442  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:29.102897  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
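The interleaved W-level node_ready lines come from a second test process (the 297527 column in the log prefix) polling the no-preload node's Ready condition roughly every 2.5 s while its apiserver is also refusing connections. A hedged sketch of that retry shape follows; it uses a plain HTTP client rather than the real authenticated client-go machinery, so even a live apiserver would answer it with 401/403 rather than the node object.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// URL as it appears in the warnings above.
    	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip TLS verification instead of loading minikube's CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			// While the apiserver is down this is the "connection refused" case.
    			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
    			time.Sleep(2500 * time.Millisecond) // matches the ~2.5 s spacing in the log
    			continue
    		}
    		resp.Body.Close()
    		fmt.Printf("attempt %d: apiserver answered: %s\n", attempt, resp.Status)
    		return
    	}
    	fmt.Println("gave up waiting for the apiserver")
    }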
	I1205 07:50:27.247317  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:27.258551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:27.258627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:27.282556  299667 cri.go:89] found id: ""
	I1205 07:50:27.282584  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.282594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:27.282601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:27.282685  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:27.311566  299667 cri.go:89] found id: ""
	I1205 07:50:27.311593  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.311602  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:27.311608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:27.311666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:27.336201  299667 cri.go:89] found id: ""
	I1205 07:50:27.336226  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.336235  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:27.336241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:27.336295  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:27.374655  299667 cri.go:89] found id: ""
	I1205 07:50:27.374733  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.374756  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:27.374804  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:27.374881  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:27.403358  299667 cri.go:89] found id: ""
	I1205 07:50:27.403381  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.403390  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:27.403396  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:27.403453  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:27.434322  299667 cri.go:89] found id: ""
	I1205 07:50:27.434347  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.434355  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:27.434362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:27.434430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:27.458621  299667 cri.go:89] found id: ""
	I1205 07:50:27.458643  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.458651  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:27.458669  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:27.458726  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:27.487490  299667 cri.go:89] found id: ""
	I1205 07:50:27.487514  299667 logs.go:282] 0 containers: []
	W1205 07:50:27.487524  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:27.487532  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:27.487543  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:27.515434  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:27.515462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:27.574832  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:27.574864  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:27.588186  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:27.588210  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:27.666339  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:27.659477   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.659839   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661371   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.661652   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:27.663195   10845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:27.666400  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:27.666420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
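Each "failed describe nodes" block above has the same shape: kubectl is run inside the guest against /var/lib/minikube/kubeconfig, exits 1 because nothing is listening on localhost:8443, and the runner reports the empty stdout plus the stderr with the memcache.go connection-refused errors. A minimal sketch of capturing that outcome, assuming only the command line shown in the log:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command line copied from the log above.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	if err := cmd.Run(); err != nil {
    		// With no apiserver listening, kubectl exits 1 and stderr carries the
    		// "connection refused" lines reproduced in the report.
    		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s",
    			err, stdout.String(), stderr.String())
    		return
    	}
    	fmt.Print(stdout.String())
    }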
	W1205 07:50:31.602443  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:34.102266  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:30.192057  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:30.203579  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:30.203657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:30.233613  299667 cri.go:89] found id: ""
	I1205 07:50:30.233663  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.233673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:30.233680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:30.233739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:30.262491  299667 cri.go:89] found id: ""
	I1205 07:50:30.262517  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.262526  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:30.262532  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:30.262599  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:30.292006  299667 cri.go:89] found id: ""
	I1205 07:50:30.292031  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.292042  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:30.292078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:30.292134  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:30.317938  299667 cri.go:89] found id: ""
	I1205 07:50:30.317963  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.317972  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:30.317979  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:30.318037  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:30.359844  299667 cri.go:89] found id: ""
	I1205 07:50:30.359871  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.359880  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:30.359887  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:30.359946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:30.391160  299667 cri.go:89] found id: ""
	I1205 07:50:30.391187  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.391196  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:30.391202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:30.391256  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:30.424091  299667 cri.go:89] found id: ""
	I1205 07:50:30.424116  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.424124  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:30.424131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:30.424186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:30.449137  299667 cri.go:89] found id: ""
	I1205 07:50:30.449184  299667 logs.go:282] 0 containers: []
	W1205 07:50:30.449193  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:30.449204  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:30.449216  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:30.477964  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:30.477990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:30.535174  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:30.535208  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:30.548511  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:30.548537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:30.611856  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:30.604616   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.605304   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.606823   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.607132   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:30.608556   10957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:30.611880  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:30.611892  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.137527  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:33.148376  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:33.148457  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:33.173779  299667 cri.go:89] found id: ""
	I1205 07:50:33.173802  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.173810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:33.173816  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:33.173893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:33.198637  299667 cri.go:89] found id: ""
	I1205 07:50:33.198661  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.198671  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:33.198678  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:33.198739  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:33.227950  299667 cri.go:89] found id: ""
	I1205 07:50:33.227972  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.227980  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:33.227986  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:33.228056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:33.252400  299667 cri.go:89] found id: ""
	I1205 07:50:33.252434  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.252446  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:33.252454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:33.252528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:33.277287  299667 cri.go:89] found id: ""
	I1205 07:50:33.277311  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.277320  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:33.277326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:33.277384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:33.303260  299667 cri.go:89] found id: ""
	I1205 07:50:33.303285  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.303294  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:33.303310  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:33.303387  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:33.327837  299667 cri.go:89] found id: ""
	I1205 07:50:33.327860  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.327868  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:33.327875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:33.327934  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:33.361138  299667 cri.go:89] found id: ""
	I1205 07:50:33.361196  299667 logs.go:282] 0 containers: []
	W1205 07:50:33.361206  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:33.361216  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:33.361227  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:33.439490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:33.439534  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:33.454134  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:33.454201  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:33.519248  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:33.511412   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.512311   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513153   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.513918   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:33.514631   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:33.519324  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:33.519346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:33.544362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:33.544404  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:36.102706  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:38.602248  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:36.073913  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:36.085180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:36.085254  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:36.111524  299667 cri.go:89] found id: ""
	I1205 07:50:36.111549  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.111558  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:36.111565  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:36.111624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:36.136758  299667 cri.go:89] found id: ""
	I1205 07:50:36.136832  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.136856  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:36.136874  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:36.136999  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:36.170081  299667 cri.go:89] found id: ""
	I1205 07:50:36.170105  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.170113  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:36.170120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:36.170177  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:36.194713  299667 cri.go:89] found id: ""
	I1205 07:50:36.194738  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.194747  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:36.194753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:36.194817  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:36.219168  299667 cri.go:89] found id: ""
	I1205 07:50:36.219190  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.219199  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:36.219205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:36.219272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:36.243582  299667 cri.go:89] found id: ""
	I1205 07:50:36.243653  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.243676  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:36.243694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:36.243775  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:36.268659  299667 cri.go:89] found id: ""
	I1205 07:50:36.268730  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.268754  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:36.268771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:36.268853  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:36.293268  299667 cri.go:89] found id: ""
	I1205 07:50:36.293338  299667 logs.go:282] 0 containers: []
	W1205 07:50:36.293361  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:36.293383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:36.293416  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:36.372932  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:36.354016   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.354781   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.355815   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.356400   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:36.369451   11161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:36.372960  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:36.372972  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:36.400267  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:36.400358  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:36.432348  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:36.432371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:36.488499  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:36.488533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.002493  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:39.016301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:39.016371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:39.041723  299667 cri.go:89] found id: ""
	I1205 07:50:39.041799  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.041815  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:39.041823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:39.041885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:39.066151  299667 cri.go:89] found id: ""
	I1205 07:50:39.066174  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.066183  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:39.066189  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:39.066266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:39.090650  299667 cri.go:89] found id: ""
	I1205 07:50:39.090673  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.090682  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:39.090688  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:39.090745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:39.119700  299667 cri.go:89] found id: ""
	I1205 07:50:39.119732  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.119740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:39.119747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:39.119810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:39.144307  299667 cri.go:89] found id: ""
	I1205 07:50:39.144369  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.144389  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:39.144406  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:39.144488  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:39.171025  299667 cri.go:89] found id: ""
	I1205 07:50:39.171048  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.171057  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:39.171063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:39.171127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:39.195100  299667 cri.go:89] found id: ""
	I1205 07:50:39.195121  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.195130  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:39.195136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:39.195197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:39.218959  299667 cri.go:89] found id: ""
	I1205 07:50:39.218980  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.218991  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:39.219000  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:39.219010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:39.243315  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:39.243346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:39.270633  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:39.270709  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:39.330141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:39.330172  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.345855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:39.345883  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:39.426940  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
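	The "failed describe nodes" block above repeats for as long as nothing listens on localhost:8443: minikube shells into the node and runs kubectl against the on-disk kubeconfig, and the command exits 1 with "connection refused". A minimal Go sketch of that probe (an illustrative reconstruction using only os/exec, run locally rather than over SSH; not minikube's actual logs.go code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Same command the log runs via ssh_runner, executed locally here.
	        cmd := exec.Command("/bin/bash", "-c",
	            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
	                "--kubeconfig=/var/lib/minikube/kubeconfig")
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            // With no apiserver on localhost:8443 this exits with status 1
	            // and the "connection refused" stderr shown above.
	            fmt.Printf("failed describe nodes: %v\n%s", err, out)
	            return
	        }
	        fmt.Printf("%s", out)
	    }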
	W1205 07:50:40.603240  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:43.103156  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:41.928763  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:41.939293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:41.939415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:41.964816  299667 cri.go:89] found id: ""
	I1205 07:50:41.964850  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.964859  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:41.964865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:41.964931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:41.990880  299667 cri.go:89] found id: ""
	I1205 07:50:41.990914  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.990923  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:41.990929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:41.990996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:42.022456  299667 cri.go:89] found id: ""
	I1205 07:50:42.022483  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.022494  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:42.022501  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:42.022570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:42.049261  299667 cri.go:89] found id: ""
	I1205 07:50:42.049328  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.049352  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:42.049369  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:42.049446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:42.077034  299667 cri.go:89] found id: ""
	I1205 07:50:42.077108  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.077134  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:42.077255  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:42.077338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:42.114881  299667 cri.go:89] found id: ""
	I1205 07:50:42.114910  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.114921  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:42.114928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:42.114994  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:42.151897  299667 cri.go:89] found id: ""
	I1205 07:50:42.151926  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.151936  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:42.151944  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:42.152012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:42.185532  299667 cri.go:89] found id: ""
	I1205 07:50:42.185556  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.185565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:42.185574  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:42.185585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:42.246490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:42.246537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:42.262324  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:42.262359  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:42.331135  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:42.331201  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:42.331219  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:42.358803  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:42.358836  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
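	Each probe cycle above walks the same eight component names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with one crictl invocation apiece; an empty ID list produces the paired `found id: ""` / "0 containers" lines. A hedged Go sketch of that loop (assumes sudo and crictl are available; it mirrors the log's behavior, not minikube's cri.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // One listing per component, exactly as in the log above.
	            out, _ := exec.Command("sudo", "crictl", "ps", "-a",
	                "--quiet", "--name="+name).Output()
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	        }
	    }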
	I1205 07:50:44.909321  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:44.920001  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:44.920070  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:44.945367  299667 cri.go:89] found id: ""
	I1205 07:50:44.945392  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.945401  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:44.945407  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:44.945463  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:44.970751  299667 cri.go:89] found id: ""
	I1205 07:50:44.970779  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.970788  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:44.970794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:44.970873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:44.999654  299667 cri.go:89] found id: ""
	I1205 07:50:44.999678  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.999688  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:44.999694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:44.999760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:45.065387  299667 cri.go:89] found id: ""
	I1205 07:50:45.065496  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.065521  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:45.065554  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:45.065661  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1205 07:50:45.105072  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:47.602920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:45.101338  299667 cri.go:89] found id: ""
	I1205 07:50:45.101365  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.101375  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:45.101386  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:45.101459  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:45.140148  299667 cri.go:89] found id: ""
	I1205 07:50:45.140181  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.140192  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:45.140200  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:45.140301  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:45.178981  299667 cri.go:89] found id: ""
	I1205 07:50:45.179025  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.179035  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:45.179043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:45.179176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:45.219922  299667 cri.go:89] found id: ""
	I1205 07:50:45.219949  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.219958  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:45.219969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:45.219989  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:45.291787  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:45.291824  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:45.306539  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:45.306565  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:45.383110  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:45.383171  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:45.383206  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:45.410722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:45.410808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:47.941304  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:47.952011  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:47.952084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:47.978179  299667 cri.go:89] found id: ""
	I1205 07:50:47.978201  299667 logs.go:282] 0 containers: []
	W1205 07:50:47.978210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:47.978216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:47.978274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:48.005927  299667 cri.go:89] found id: ""
	I1205 07:50:48.005954  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.005964  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:48.005971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:48.006042  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:48.040049  299667 cri.go:89] found id: ""
	I1205 07:50:48.040133  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.040156  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:48.040175  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:48.040269  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:48.066524  299667 cri.go:89] found id: ""
	I1205 07:50:48.066549  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.066558  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:48.066564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:48.066627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:48.096997  299667 cri.go:89] found id: ""
	I1205 07:50:48.097026  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.097036  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:48.097043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:48.097103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:48.123968  299667 cri.go:89] found id: ""
	I1205 07:50:48.123990  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.123999  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:48.124005  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:48.124066  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:48.151529  299667 cri.go:89] found id: ""
	I1205 07:50:48.151554  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.151564  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:48.151570  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:48.151629  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:48.181245  299667 cri.go:89] found id: ""
	I1205 07:50:48.181270  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.181279  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:48.181297  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:48.181308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:48.240786  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:48.240832  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:48.255504  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:48.255533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:48.325828  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:48.325849  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:48.325862  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:48.350818  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:48.350898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:50.103331  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:52.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
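	The interleaved node_ready.go warnings come from a second test profile (process 297527) polling the no-preload node object every ~2.5 s and retrying on error. A sketch of such a poll using the URL from the log (assumptions: the cadence is read off the timestamps, certificate verification is skipped purely for illustration, and a real apiserver client would also present credentials):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // Illustration only: the test cluster's cert is self-signed.
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
	        for i := 0; i < 5; i++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                // Matches the node_ready.go warnings: log and retry.
	                fmt.Printf("error getting node (will retry): %v\n", err)
	                time.Sleep(2500 * time.Millisecond)
	                continue
	            }
	            resp.Body.Close()
	            fmt.Println("node object fetched:", resp.Status)
	            return
	        }
	    }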
	I1205 07:50:50.887376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:50.898712  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:50.898787  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:50.926387  299667 cri.go:89] found id: ""
	I1205 07:50:50.926412  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.926421  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:50.926428  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:50.926499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:50.951318  299667 cri.go:89] found id: ""
	I1205 07:50:50.951341  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.951349  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:50.951356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:50.951431  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:50.978509  299667 cri.go:89] found id: ""
	I1205 07:50:50.978536  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.978545  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:50.978551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:50.978614  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:51.017851  299667 cri.go:89] found id: ""
	I1205 07:50:51.017875  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.017884  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:51.017894  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:51.017957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:51.048705  299667 cri.go:89] found id: ""
	I1205 07:50:51.048772  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.048797  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:51.048815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:51.048901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:51.078364  299667 cri.go:89] found id: ""
	I1205 07:50:51.078427  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.078448  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:51.078468  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:51.078560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:51.110914  299667 cri.go:89] found id: ""
	I1205 07:50:51.110955  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.110965  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:51.110970  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:51.111064  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:51.136737  299667 cri.go:89] found id: ""
	I1205 07:50:51.136762  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.136771  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:51.136781  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:51.136793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:51.197928  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:51.197949  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:51.197961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:51.222938  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:51.222968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:51.253887  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:51.253914  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:51.309729  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:51.309759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
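	When every container probe comes back empty, the cycle falls through to the "Gathering logs for ..." phase: kubelet and containerd via journalctl, kernel messages via dmesg, and container status via crictl with a docker fallback (the backtick substitution `which crictl || echo crictl` keeps the command usable even when crictl is missing). A compact Go sketch of those four collectors (the gather helper is a hypothetical name, not minikube's):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one collector and reports a failure; a real collector
	    // would persist the output instead of discarding it.
	    func gather(label, cmd string) {
	        fmt.Println("Gathering logs for", label, "...")
	        if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
	            fmt.Printf("  %s failed: %v\n%s", label, err, out)
	        }
	    }

	    func main() {
	        gather("kubelet", "sudo journalctl -u kubelet -n 400")
	        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	        gather("containerd", "sudo journalctl -u containerd -n 400")
	        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	    }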
	I1205 07:50:53.824280  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:53.834821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:53.834895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:53.882567  299667 cri.go:89] found id: ""
	I1205 07:50:53.882607  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.882617  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:53.882623  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:53.882708  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:53.924413  299667 cri.go:89] found id: ""
	I1205 07:50:53.924439  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.924447  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:53.924454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:53.924521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:53.949296  299667 cri.go:89] found id: ""
	I1205 07:50:53.949329  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.949339  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:53.949345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:53.949421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:53.973974  299667 cri.go:89] found id: ""
	I1205 07:50:53.974036  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.974050  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:53.974058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:53.974114  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:53.999073  299667 cri.go:89] found id: ""
	I1205 07:50:53.999139  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.999154  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:53.999162  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:53.999221  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:54.026401  299667 cri.go:89] found id: ""
	I1205 07:50:54.026425  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.026434  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:54.026441  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:54.026523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:54.056156  299667 cri.go:89] found id: ""
	I1205 07:50:54.056181  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.056191  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:54.056197  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:54.056266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:54.080916  299667 cri.go:89] found id: ""
	I1205 07:50:54.080955  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.080964  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:54.080973  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:54.080985  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:54.105836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:54.105870  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:54.134673  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:54.134702  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:54.191141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:54.191175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:54.204290  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:54.204332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:54.267087  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:50:55.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:57.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:59.602402  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:56.768821  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:56.779222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:56.779288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:56.807155  299667 cri.go:89] found id: ""
	I1205 07:50:56.807179  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.807188  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:56.807195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:56.807280  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:56.831710  299667 cri.go:89] found id: ""
	I1205 07:50:56.831737  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.831746  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:56.831753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:56.831812  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:56.867145  299667 cri.go:89] found id: ""
	I1205 07:50:56.867169  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.867178  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:56.867185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:56.867243  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:56.893127  299667 cri.go:89] found id: ""
	I1205 07:50:56.893152  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.893174  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:56.893180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:56.893237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:56.922421  299667 cri.go:89] found id: ""
	I1205 07:50:56.922450  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.922460  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:56.922466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:56.922543  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:56.945778  299667 cri.go:89] found id: ""
	I1205 07:50:56.945808  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.945817  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:56.945823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:56.945907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:56.974442  299667 cri.go:89] found id: ""
	I1205 07:50:56.974473  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.974482  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:56.974489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:56.974559  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:56.998662  299667 cri.go:89] found id: ""
	I1205 07:50:56.998685  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.998694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:56.998703  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:56.998715  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:57.058833  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:57.058867  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:57.072293  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:57.072322  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:57.139010  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:57.139030  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:57.139042  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:57.163607  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:57.163639  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
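	Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a cheap process-level liveness check: pgrep exits non-zero when no matching process exists, which is what keeps these probes looping. Sketched in Go (assumption: the exit status is the only signal consulted here):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // pgrep -x: exact match, -n: newest, -f: match the full command line.
	        err := exec.Command("sudo", "pgrep", "-xnf",
	            "kube-apiserver.*minikube.*").Run()
	        if err != nil {
	            fmt.Println("kube-apiserver process not found; falling back to CRI probes")
	            return
	        }
	        fmt.Println("kube-apiserver process is running")
	    }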
	I1205 07:50:59.693334  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:59.704756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:59.704870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:59.732171  299667 cri.go:89] found id: ""
	I1205 07:50:59.732198  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.732208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:59.732214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:59.732272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:59.757954  299667 cri.go:89] found id: ""
	I1205 07:50:59.757981  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.757990  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:59.757996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:59.758076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:59.787824  299667 cri.go:89] found id: ""
	I1205 07:50:59.787846  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.787855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:59.787862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:59.787977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:59.813474  299667 cri.go:89] found id: ""
	I1205 07:50:59.813497  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.813506  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:59.813512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:59.813580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:59.842057  299667 cri.go:89] found id: ""
	I1205 07:50:59.842079  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.842088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:59.842094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:59.842162  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:59.872569  299667 cri.go:89] found id: ""
	I1205 07:50:59.872593  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.872602  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:59.872608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:59.872671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:59.905410  299667 cri.go:89] found id: ""
	I1205 07:50:59.905435  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.905443  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:59.905450  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:59.905514  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:59.932703  299667 cri.go:89] found id: ""
	I1205 07:50:59.932744  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.932754  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:59.932763  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:59.932774  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.964043  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:59.964069  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:00.020877  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:00.023486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:00.055130  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:00.055166  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:02.102411  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:04.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:00.182237  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
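Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach the apiserver, so each probe to localhost:8443 is refused before API discovery even starts. A quick manual check from inside the affected node confirms whether anything is bound to that port (a minimal sketch; the profile name for process 299667 is not visible in this part of the log, so substitute your own):

	# Is anything listening on the apiserver port inside the node? (sketch)
	minikube -p <profile> ssh -- "sudo ss -tlnp | grep 8443 || echo 'nothing on 8443'"
	# Mirror the log-gatherer's container listing to see whether any
	# control-plane container was ever created, in any state:
	minikube -p <profile> ssh -- sudo crictl ps -a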
	I1205 07:51:00.182280  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:00.182298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:02.739834  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:02.750886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:02.750958  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:02.776293  299667 cri.go:89] found id: ""
	I1205 07:51:02.776319  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.776328  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:02.776334  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:02.776393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:02.803043  299667 cri.go:89] found id: ""
	I1205 07:51:02.803080  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.803089  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:02.803096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:02.803176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:02.827935  299667 cri.go:89] found id: ""
	I1205 07:51:02.827957  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.827966  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:02.827972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:02.828031  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:02.859181  299667 cri.go:89] found id: ""
	I1205 07:51:02.859204  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.859215  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:02.859222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:02.859282  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:02.893626  299667 cri.go:89] found id: ""
	I1205 07:51:02.893668  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.893678  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:02.893685  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:02.893755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:02.924778  299667 cri.go:89] found id: ""
	I1205 07:51:02.924808  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.924818  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:02.924830  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:02.924890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:02.950184  299667 cri.go:89] found id: ""
	I1205 07:51:02.950211  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.950220  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:02.950229  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:02.950288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:02.976829  299667 cri.go:89] found id: ""
	I1205 07:51:02.976855  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.976865  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:02.976874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:02.976885  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:03.015998  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:03.016071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:03.072438  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:03.072473  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:03.087250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:03.087283  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:03.153281  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:03.153306  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:03.153319  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:07.103249  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:09.602341  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:05.678289  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:05.688964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:05.689032  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:05.714382  299667 cri.go:89] found id: ""
	I1205 07:51:05.714403  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.714412  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:05.714419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:05.714486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:05.743946  299667 cri.go:89] found id: ""
	I1205 07:51:05.743968  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.743976  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:05.743983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:05.744043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:05.768270  299667 cri.go:89] found id: ""
	I1205 07:51:05.768293  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.768303  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:05.768309  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:05.768367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:05.795557  299667 cri.go:89] found id: ""
	I1205 07:51:05.795580  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.795588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:05.795595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:05.795652  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:05.820607  299667 cri.go:89] found id: ""
	I1205 07:51:05.820634  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.820643  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:05.820649  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:05.820707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:05.853624  299667 cri.go:89] found id: ""
	I1205 07:51:05.853648  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.853657  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:05.853670  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:05.853752  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:05.885144  299667 cri.go:89] found id: ""
	I1205 07:51:05.885200  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.885213  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:05.885219  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:05.885296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:05.917755  299667 cri.go:89] found id: ""
	I1205 07:51:05.917777  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.917785  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:05.917794  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:05.917808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:05.978242  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:05.978286  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:05.992931  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:05.992961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:06.070949  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:06.070979  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:06.070992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:06.096749  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:06.096780  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.634532  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:08.646959  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:08.647038  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:08.678851  299667 cri.go:89] found id: ""
	I1205 07:51:08.678875  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.678884  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:08.678890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:08.678954  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:08.702970  299667 cri.go:89] found id: ""
	I1205 07:51:08.702992  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.703001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:08.703006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:08.703063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:08.727238  299667 cri.go:89] found id: ""
	I1205 07:51:08.727259  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.727267  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:08.727273  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:08.727329  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:08.752084  299667 cri.go:89] found id: ""
	I1205 07:51:08.752106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.752114  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:08.752120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:08.752183  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:08.775775  299667 cri.go:89] found id: ""
	I1205 07:51:08.775797  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.775805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:08.775811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:08.775878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:08.800101  299667 cri.go:89] found id: ""
	I1205 07:51:08.800122  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.800130  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:08.800136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:08.800193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:08.826081  299667 cri.go:89] found id: ""
	I1205 07:51:08.826106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.826115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:08.826121  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:08.826179  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:08.850937  299667 cri.go:89] found id: ""
	I1205 07:51:08.850969  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.850979  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:08.850987  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:08.851004  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.884057  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:08.884093  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:08.946750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:08.946793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:08.960852  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:08.960880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:09.030565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:09.030587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:09.030601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
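When every crictl query comes back empty, as in each cycle here, the kubelet and containerd journals that the gatherer pulls ("journalctl -u kubelet -n 400", "journalctl -u containerd -n 400") are the places most likely to say why no control-plane container was ever started. The same units can be inspected interactively (a sketch, assuming systemd inside the node, as the log's journalctl calls imply):

	# Check unit state and recent restarts first, then read the tail of each journal:
	systemctl status kubelet containerd --no-pager
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u containerd -n 400 --no-pager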
	W1205 07:51:11.602638  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:12.602298  297527 node_ready.go:38] duration metric: took 6m0.000452624s for node "no-preload-241270" to be "Ready" ...
	I1205 07:51:12.605551  297527 out.go:203] 
	W1205 07:51:12.608371  297527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 07:51:12.608388  297527 out.go:285] * 
	W1205 07:51:12.610554  297527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:51:12.612665  297527 out.go:203] 
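This is where process 297527 gives up: node_ready.go polled the node object for the full 6m0s wait window, every GET to https://192.168.76.2:8443 was refused, and the start fails with GUEST_START once the context deadline expires. The failing probe can be reproduced directly, and the advice box above gives the command for collecting a full log bundle (values below are taken from the log; this is a sketch, not the test harness itself):

	# Reproduce the readiness probe node_ready.go was issuing:
	curl -k --max-time 5 https://192.168.76.2:8443/api/v1/nodes/no-preload-241270 \
	  || echo "connection refused: apiserver never came up"
	# Collect the full log bundle suggested above for a bug report:
	minikube logs --file=logs.txt -p no-preload-241270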
	I1205 07:51:11.556651  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:11.567626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:11.567701  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:11.595760  299667 cri.go:89] found id: ""
	I1205 07:51:11.595786  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.595795  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:11.595802  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:11.595859  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:11.646030  299667 cri.go:89] found id: ""
	I1205 07:51:11.646056  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.646065  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:11.646072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:11.646138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:11.675282  299667 cri.go:89] found id: ""
	I1205 07:51:11.675310  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.675319  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:11.675325  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:11.675385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:11.699688  299667 cri.go:89] found id: ""
	I1205 07:51:11.699712  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.699721  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:11.699727  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:11.699791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:11.723819  299667 cri.go:89] found id: ""
	I1205 07:51:11.723843  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.723852  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:11.723859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:11.723915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:11.751470  299667 cri.go:89] found id: ""
	I1205 07:51:11.751496  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.751505  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:11.751512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:11.751568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:11.775893  299667 cri.go:89] found id: ""
	I1205 07:51:11.775921  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.775929  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:11.775936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:11.775993  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:11.802990  299667 cri.go:89] found id: ""
	I1205 07:51:11.803012  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.803021  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:11.803033  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:11.803044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:11.859684  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:11.859767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:11.876859  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:11.876889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:11.952118  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:11.944168   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.944893   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.946566   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.947157   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.948800   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:11.944168   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.944893   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.946566   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.947157   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.948800   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:11.952191  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:11.952220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:11.976596  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:11.976630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.510895  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:14.522084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:14.522151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:14.554050  299667 cri.go:89] found id: ""
	I1205 07:51:14.554069  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.554078  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:14.554084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:14.554139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:14.581712  299667 cri.go:89] found id: ""
	I1205 07:51:14.581732  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.581740  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:14.581746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:14.581810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:14.658701  299667 cri.go:89] found id: ""
	I1205 07:51:14.658723  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.658731  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:14.658737  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:14.658803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:14.686921  299667 cri.go:89] found id: ""
	I1205 07:51:14.686940  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.686948  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:14.686954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:14.687024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:14.720928  299667 cri.go:89] found id: ""
	I1205 07:51:14.720949  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.720957  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:14.720972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:14.721046  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:14.758959  299667 cri.go:89] found id: ""
	I1205 07:51:14.758983  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.758992  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:14.758998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:14.759054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:14.810754  299667 cri.go:89] found id: ""
	I1205 07:51:14.810775  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.810888  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:14.810895  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:14.810966  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:14.865350  299667 cri.go:89] found id: ""
	I1205 07:51:14.865369  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.865379  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:14.865387  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:14.865398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:14.920139  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:14.920170  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.973197  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:14.973224  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:15.042929  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:15.042968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:15.069350  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:15.069377  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:15.167229  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:15.157061   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.158379   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.159455   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.160498   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.161615   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:15.157061   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.158379   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.159455   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.160498   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.161615   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
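Meanwhile process 299667 keeps repeating the same diagnostic cycle every few seconds: a pgrep for a running kube-apiserver, eight crictl queries (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), then the five log gathers (container status, kubelet, dmesg, describe nodes, containerd). Collapsed into one shell loop, the cycle looks roughly like this (a sketch of what the gatherer runs over SSH, not minikube's actual implementation):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # Empty output means the component has no container in any state.
	  sudo crictl ps -a --quiet --name="$name"
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400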
	I1205 07:51:17.667454  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:17.677695  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:17.677767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:17.710656  299667 cri.go:89] found id: ""
	I1205 07:51:17.710678  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.710687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:17.710693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:17.710755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:17.738643  299667 cri.go:89] found id: ""
	I1205 07:51:17.738665  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.738674  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:17.738680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:17.738736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:17.762784  299667 cri.go:89] found id: ""
	I1205 07:51:17.762806  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.762815  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:17.762821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:17.762880  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:17.788678  299667 cri.go:89] found id: ""
	I1205 07:51:17.788699  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.788714  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:17.788720  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:17.788776  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:17.818009  299667 cri.go:89] found id: ""
	I1205 07:51:17.818031  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.818040  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:17.818046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:17.818103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:17.850251  299667 cri.go:89] found id: ""
	I1205 07:51:17.850272  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.850288  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:17.850295  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:17.850354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:17.879482  299667 cri.go:89] found id: ""
	I1205 07:51:17.879503  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.879512  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:17.879518  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:17.879579  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:17.916240  299667 cri.go:89] found id: ""
	I1205 07:51:17.916261  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.916270  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:17.916278  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:17.916344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:17.945888  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:17.945915  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:18.004030  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:18.004079  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:18.022346  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:18.022422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:18.096445  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:18.087987   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.088572   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090232   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090775   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.092338   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:18.087987   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.088572   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090232   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090775   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.092338   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:18.096468  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:18.096481  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.623691  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:20.635279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:20.635409  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:20.670295  299667 cri.go:89] found id: ""
	I1205 07:51:20.670369  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.670390  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:20.670410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:20.670493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:20.701924  299667 cri.go:89] found id: ""
	I1205 07:51:20.701948  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.701957  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:20.701964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:20.702055  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:20.727557  299667 cri.go:89] found id: ""
	I1205 07:51:20.727599  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.727622  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:20.727638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:20.727714  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:20.753615  299667 cri.go:89] found id: ""
	I1205 07:51:20.753640  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.753648  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:20.753655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:20.753744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:20.778426  299667 cri.go:89] found id: ""
	I1205 07:51:20.778450  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.778459  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:20.778466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:20.778556  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:20.803580  299667 cri.go:89] found id: ""
	I1205 07:51:20.803605  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.803615  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:20.803638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:20.803707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:20.833142  299667 cri.go:89] found id: ""
	I1205 07:51:20.833193  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.833202  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:20.833208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:20.833285  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:20.868368  299667 cri.go:89] found id: ""
	I1205 07:51:20.868443  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.868465  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:20.868486  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:20.868523  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.895451  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:20.895524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:20.926652  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:20.926677  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:20.981657  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:20.981692  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:20.995302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:20.995329  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:21.064074  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:21.055838   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.056503   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.058334   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.059023   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.060931   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:21.055838   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.056503   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.058334   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.059023   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.060931   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
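Each cri.go line above lists containers under the containerd runc root /run/containerd/runc/k8s.io, i.e. the k8s.io containerd namespace. When crictl keeps returning nothing, querying containerd directly helps distinguish a CRI-plugin problem from containers genuinely never having been created (a sketch; the endpoint path is containerd's default socket, assumed here):

	# Ask containerd itself, bypassing the CRI layer:
	sudo ctr --namespace k8s.io containers list
	# And the CRI endpoint crictl talks to:
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a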
	I1205 07:51:23.564875  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:23.575583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:23.575650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:23.616208  299667 cri.go:89] found id: ""
	I1205 07:51:23.616234  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.616243  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:23.616251  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:23.616314  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:23.645044  299667 cri.go:89] found id: ""
	I1205 07:51:23.645068  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.645077  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:23.645083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:23.645148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:23.679840  299667 cri.go:89] found id: ""
	I1205 07:51:23.679861  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.679870  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:23.679876  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:23.679931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:23.704932  299667 cri.go:89] found id: ""
	I1205 07:51:23.704954  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.704962  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:23.704980  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:23.705040  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:23.730380  299667 cri.go:89] found id: ""
	I1205 07:51:23.730403  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.730411  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:23.730418  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:23.730483  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:23.754200  299667 cri.go:89] found id: ""
	I1205 07:51:23.754224  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.754233  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:23.754240  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:23.754318  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:23.778888  299667 cri.go:89] found id: ""
	I1205 07:51:23.778913  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.778921  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:23.778927  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:23.778983  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:23.803021  299667 cri.go:89] found id: ""
	I1205 07:51:23.803045  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.803054  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:23.803063  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:23.803074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:23.859725  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:23.859805  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:23.878639  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:23.878714  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:23.953245  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:23.953267  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:23.953280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:23.978428  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:23.978460  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:26.510161  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:26.520589  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:26.520663  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:26.545475  299667 cri.go:89] found id: ""
	I1205 07:51:26.545500  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.545508  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:26.545515  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:26.545570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:26.570378  299667 cri.go:89] found id: ""
	I1205 07:51:26.570401  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.570409  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:26.570416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:26.570476  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:26.596521  299667 cri.go:89] found id: ""
	I1205 07:51:26.596547  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.596556  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:26.596562  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:26.596618  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:26.624228  299667 cri.go:89] found id: ""
	I1205 07:51:26.624255  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.624264  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:26.624280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:26.624336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:26.650763  299667 cri.go:89] found id: ""
	I1205 07:51:26.650797  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.650807  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:26.650813  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:26.650870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:26.681944  299667 cri.go:89] found id: ""
	I1205 07:51:26.681972  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.681980  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:26.681987  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:26.682043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:26.706897  299667 cri.go:89] found id: ""
	I1205 07:51:26.706918  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.706927  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:26.706933  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:26.706991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:26.732536  299667 cri.go:89] found id: ""
	I1205 07:51:26.732560  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.732569  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:26.732578  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:26.732619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:26.789640  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:26.789673  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:26.803060  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:26.803089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:26.884697  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:26.884720  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:26.884737  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:26.912821  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:26.912856  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:29.445153  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:29.455673  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:29.455740  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:29.479669  299667 cri.go:89] found id: ""
	I1205 07:51:29.479694  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.479702  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:29.479709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:29.479768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:29.504129  299667 cri.go:89] found id: ""
	I1205 07:51:29.504151  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.504160  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:29.504166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:29.504223  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:29.528037  299667 cri.go:89] found id: ""
	I1205 07:51:29.528061  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.528071  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:29.528077  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:29.528137  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:29.553104  299667 cri.go:89] found id: ""
	I1205 07:51:29.553129  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.553138  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:29.553145  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:29.553252  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:29.582155  299667 cri.go:89] found id: ""
	I1205 07:51:29.582180  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.582189  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:29.582195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:29.582251  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:29.616156  299667 cri.go:89] found id: ""
	I1205 07:51:29.616181  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.616190  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:29.616205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:29.616279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:29.643373  299667 cri.go:89] found id: ""
	I1205 07:51:29.643399  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.643407  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:29.643413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:29.643474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:29.669624  299667 cri.go:89] found id: ""
	I1205 07:51:29.669649  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.669658  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:29.669667  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:29.669678  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:29.725864  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:29.725897  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:29.739284  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:29.739311  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:29.812338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:29.812358  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:29.812371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:29.837776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:29.837808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:32.374773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:32.385440  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:32.385519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:32.410264  299667 cri.go:89] found id: ""
	I1205 07:51:32.410285  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.410294  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:32.410301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:32.410380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:32.435693  299667 cri.go:89] found id: ""
	I1205 07:51:32.435716  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.435724  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:32.435730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:32.435789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:32.459782  299667 cri.go:89] found id: ""
	I1205 07:51:32.459854  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.459865  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:32.459872  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:32.460140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:32.490196  299667 cri.go:89] found id: ""
	I1205 07:51:32.490221  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.490230  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:32.490236  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:32.490302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:32.515432  299667 cri.go:89] found id: ""
	I1205 07:51:32.515456  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.515465  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:32.515472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:32.515535  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:32.544631  299667 cri.go:89] found id: ""
	I1205 07:51:32.544657  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.544666  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:32.544672  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:32.544733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:32.568734  299667 cri.go:89] found id: ""
	I1205 07:51:32.568759  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.568768  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:32.568785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:32.568841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:32.593347  299667 cri.go:89] found id: ""
	I1205 07:51:32.593375  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.593385  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:32.593394  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:32.593406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:32.663939  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:32.663975  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:32.678486  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:32.678514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:32.740819  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:32.740842  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:32.740854  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:32.765510  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:32.765539  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:35.296522  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:35.310277  299667 out.go:203] 
	W1205 07:51:35.313261  299667 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 07:51:35.313316  299667 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 07:51:35.313333  299667 out.go:285] * Related issues:
	W1205 07:51:35.313353  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 07:51:35.313373  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 07:51:35.316371  299667 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209287352Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209303147Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209319738Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209338060Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209354355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209371619Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209407246Z" level=info msg="runtime interface created"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209414106Z" level=info msg="created NRI interface"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209431698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209473470Z" level=info msg="Connect containerd service"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209745990Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.210997942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227442652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227515662Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227533837Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227584988Z" level=info msg="Start recovering state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248902324Z" level=info msg="Start event monitor"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248944278Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248954567Z" level=info msg="Start streaming server"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248967343Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248975425Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248982071Z" level=info msg="starting plugins..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249010797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249144378Z" level=info msg="containerd successfully booted in 0.058238s"
	Dec 05 07:45:31 newest-cni-622440 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:45.068742   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:45.070059   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:45.070875   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:45.072721   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:45.073449   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:45 up  2:34,  0 user,  load average: 1.08, 0.83, 1.30
	Linux newest-cni-622440 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:51:40 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:40 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:41 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:42 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:42 newest-cni-622440 kubelet[13670]: E1205 07:51:42.424783   13670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:42 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:42 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:43 newest-cni-622440 kubelet[13707]: E1205 07:51:43.157438   13707 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:43 newest-cni-622440 kubelet[13728]: E1205 07:51:43.905072   13728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:43 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:44 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 05 07:51:44 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:44 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:44 newest-cni-622440 kubelet[13735]: E1205 07:51:44.655331   13735 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:44 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:44 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
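The repeated blocks above are minikube's apiserver wait loop: every few seconds it checks for a kube-apiserver process, asks crictl for each control-plane container (every query returns an empty id list), and re-gathers the kubelet, dmesg, describe-nodes, containerd and container-status logs, until the 6m0s timeout fires as K8S_APISERVER_MISSING. The kubelet section carries the actual root cause: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host. A minimal sketch of the same checks, run inside the node (assuming access via `minikube ssh -p newest-cni-622440`; the first two commands are taken verbatim from the log):

	# The two probes the wait loop runs on every iteration:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Which cgroup version is the host on? "cgroup2fs" means v2;
	# "tmpfs" means v1, which this kubelet rejects outright.
	stat -fc %T /sys/fs/cgroup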
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (391.576443ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-622440" apiserver is not running, skipping kubectl commands (state="Stopped")
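With the apiserver reported Stopped while the container itself is still running (see the docker inspect output below), one hypothetical manual check is to hit the mapped secure port directly; with no apiserver process listening it should fail with connection refused. The 127.0.0.1:33106 mapping for 8443/tcp comes from the docker inspect output that follows:

	# Hypothetical probe; 33106 is the host port Docker bound to 8443/tcp.
	curl -k https://127.0.0.1:33106/healthz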
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-622440
helpers_test.go:243: (dbg) docker inspect newest-cni-622440:

-- stdout --
	[
	    {
	        "Id": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	        "Created": "2025-12-05T07:34:55.965403434Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:25.584904359Z",
	            "FinishedAt": "2025-12-05T07:45:24.024543459Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/hosts",
	        "LogPath": "/var/lib/docker/containers/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4/9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4-json.log",
	        "Name": "/newest-cni-622440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-622440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-622440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9420074472d9b45c005b1602724c3c00deb16ad24b339caa18228a43c97b93f4",
	                "LowerDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8f8161934531c8b227e01631dc6806268afdb8d0aa0997f5af641c614df99d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-622440",
	                "Source": "/var/lib/docker/volumes/newest-cni-622440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-622440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-622440",
	                "name.minikube.sigs.k8s.io": "newest-cni-622440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed9530bf43b75054636d02a5c2e26f04f7734993d5bbcca1755d31d58cd478eb",
	            "SandboxKey": "/var/run/docker/netns/ed9530bf43b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-622440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:fd:48:71:b9:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96c6294e00fc4b96dda84202da479b822dd69419748060a344f1800d21559cfe",
	                    "EndpointID": "58c3f199e7d48a7db52c99942eb204475e9d0d215b5c84cb3379d82aa57f00e6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-622440",
	                        "9420074472d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
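At the Docker level the node looks healthy: the container is Running, privileged, with 3 GiB of memory, 2 CPUs, and the apiserver's 8443/tcp published on 127.0.0.1:33106. As an illustrative one-liner (names taken from the inspect output above), the mapped port can be pulled out with a Go template:

	# Prints the host port bound to 8443/tcp: 33106 in the output above.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-622440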
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (323.891379ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
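Host Running combined with APIServer Stopped matches the kubelet crash loop in the logs above: the node container is up, but the control plane never came up inside it. A sketch that reads both fields the harness queried one at a time, assuming the status template accepts multiple fields (it is a Go template rendered over the same status struct):

	out/minikube-linux-arm64 status -p newest-cni-622440 --format='{{.Host}}/{{.APIServer}}'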
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-622440 logs -n 25: (1.778664715s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ default-k8s-diff-port-083143 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p default-k8s-diff-port-083143 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ image   │ embed-certs-861489 image list --format=json                                                                                                                                                                                                                │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ pause   │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ unpause │ -p embed-certs-861489 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p default-k8s-diff-port-083143                                                                                                                                                                                                                            │ default-k8s-diff-port-083143 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ delete  │ -p disable-driver-mounts-358601                                                                                                                                                                                                                            │ disable-driver-mounts-358601 │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ delete  │ -p embed-certs-861489                                                                                                                                                                                                                                      │ embed-certs-861489           │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-622440 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │                     │
	│ stop    │ -p no-preload-241270 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p no-preload-241270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-241270            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ stop    │ -p newest-cni-622440 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-622440 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p newest-cni-622440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ image   │ newest-cni-622440 image list --format=json                                                                                                                                                                                                                 │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	│ pause   │ -p newest-cni-622440 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	│ unpause │ -p newest-cni-622440 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-622440            │ jenkins │ v1.37.0 │ 05 Dec 25 07:51 UTC │ 05 Dec 25 07:51 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:25.089760  299667 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:25.090022  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090052  299667 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:25.090069  299667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:25.090384  299667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:45:25.090842  299667 out.go:368] Setting JSON to false
	I1205 07:45:25.091806  299667 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8872,"bootTime":1764911853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:45:25.091916  299667 start.go:143] virtualization:  
	I1205 07:45:25.094988  299667 out.go:179] * [newest-cni-622440] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:25.098817  299667 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:25.098909  299667 notify.go:221] Checking for updates...
	I1205 07:45:25.105041  299667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:25.108085  299667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:25.111075  299667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:45:25.114070  299667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:25.117093  299667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:25.120796  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:25.121387  299667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:25.146702  299667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:25.146810  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.201970  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.192879595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.202086  299667 docker.go:319] overlay module found
	I1205 07:45:25.205420  299667 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:25.208200  299667 start.go:309] selected driver: docker
	I1205 07:45:25.208216  299667 start.go:927] validating driver "docker" against &{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.208322  299667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:25.209018  299667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:25.271889  299667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:45:25.262935561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:25.272253  299667 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 07:45:25.272290  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:25.272360  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:25.272408  299667 start.go:353] cluster config:
	{Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.275549  299667 out.go:179] * Starting "newest-cni-622440" primary control-plane node in "newest-cni-622440" cluster
	I1205 07:45:25.278335  299667 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 07:45:25.281398  299667 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:25.284371  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:25.284526  299667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:25.304420  299667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:25.304443  299667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:45:25.350688  299667 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1205 07:45:25.522612  299667 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1205 07:45:25.522872  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.522902  299667 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.522986  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:45:25.522997  299667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.314µs
	I1205 07:45:25.523010  299667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:45:25.523020  299667 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523050  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:45:25.523054  299667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.177µs
	I1205 07:45:25.523060  299667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523070  299667 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523108  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:45:25.523117  299667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.906µs
	I1205 07:45:25.523123  299667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523137  299667 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523144  299667 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:25.523164  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:45:25.523170  299667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 37.867µs
	I1205 07:45:25.523176  299667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523180  299667 start.go:360] acquireMachinesLock for newest-cni-622440: {Name:mkeb4e2da132b130a104992b75a60b447ca2eae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523184  299667 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523220  299667 start.go:364] duration metric: took 26.043µs to acquireMachinesLock for "newest-cni-622440"
	I1205 07:45:25.523232  299667 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:25.523223  299667 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523248  299667 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:45:25.523282  299667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.595µs
	I1205 07:45:25.523288  299667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:45:25.523277  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:45:25.523289  299667 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:25.523237  299667 fix.go:54] fixHost starting: 
	I1205 07:45:25.523319  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:45:25.523328  299667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 144.182µs
	I1205 07:45:25.523335  299667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:45:25.523296  299667 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 85.228µs
	I1205 07:45:25.523346  299667 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:45:25.523368  299667 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:45:25.523373  299667 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 85.498µs
	I1205 07:45:25.523378  299667 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:45:25.523390  299667 cache.go:87] Successfully saved all images to host disk.
	I1205 07:45:25.523585  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.542111  299667 fix.go:112] recreateIfNeeded on newest-cni-622440: state=Stopped err=<nil>
	W1205 07:45:25.542142  299667 fix.go:138] unexpected machine state, will restart: <nil>
	W1205 07:45:26.103157  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:26.555898  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:26.616440  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:26.616472  297527 retry.go:31] will retry after 4.350402654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.227883  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:27.290238  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:27.290274  297527 retry.go:31] will retry after 4.46337589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:28.602428  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:25.545608  299667 out.go:252] * Restarting existing docker container for "newest-cni-622440" ...
	I1205 07:45:25.545717  299667 cli_runner.go:164] Run: docker start newest-cni-622440
	I1205 07:45:25.826053  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:25.856383  299667 kic.go:430] container "newest-cni-622440" state is running.
	I1205 07:45:25.856775  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:25.877321  299667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/config.json ...
	I1205 07:45:25.877542  299667 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:25.878047  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:25.903226  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:25.903553  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:25.903561  299667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:25.904107  299667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35184->127.0.0.1:33103: read: connection reset by peer
	I1205 07:45:29.056730  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.056754  299667 ubuntu.go:182] provisioning hostname "newest-cni-622440"
	I1205 07:45:29.056818  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.074923  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.075238  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.075256  299667 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-622440 && echo "newest-cni-622440" | sudo tee /etc/hostname
	I1205 07:45:29.238817  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-622440
	
	I1205 07:45:29.238924  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.256394  299667 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:29.256698  299667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1205 07:45:29.256720  299667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-622440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-622440/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-622440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:29.409360  299667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:29.409384  299667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 07:45:29.409403  299667 ubuntu.go:190] setting up certificates
	I1205 07:45:29.409412  299667 provision.go:84] configureAuth start
	I1205 07:45:29.409469  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:29.426522  299667 provision.go:143] copyHostCerts
	I1205 07:45:29.426598  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 07:45:29.426610  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 07:45:29.426695  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 07:45:29.426806  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 07:45:29.426817  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 07:45:29.426846  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 07:45:29.426910  299667 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 07:45:29.426920  299667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 07:45:29.426946  299667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 07:45:29.427008  299667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.newest-cni-622440 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-622440]
	I1205 07:45:29.583992  299667 provision.go:177] copyRemoteCerts
	I1205 07:45:29.584079  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:29.584142  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.601241  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.705331  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:45:29.723929  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 07:45:29.741035  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:45:29.758654  299667 provision.go:87] duration metric: took 349.219709ms to configureAuth
	I1205 07:45:29.758682  299667 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:29.758882  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:29.758893  299667 machine.go:97] duration metric: took 3.881342431s to provisionDockerMachine
	I1205 07:45:29.758901  299667 start.go:293] postStartSetup for "newest-cni-622440" (driver="docker")
	I1205 07:45:29.758917  299667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:29.758966  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:29.759008  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.777016  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:29.881927  299667 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:29.889885  299667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:29.889915  299667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:29.889927  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 07:45:29.889986  299667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 07:45:29.890075  299667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 07:45:29.890181  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:29.899716  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:29.920554  299667 start.go:296] duration metric: took 161.628343ms for postStartSetup
	I1205 07:45:29.920647  299667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:29.920717  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:29.938834  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.040045  299667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:30.045649  299667 fix.go:56] duration metric: took 4.522402293s for fixHost
	I1205 07:45:30.045683  299667 start.go:83] releasing machines lock for "newest-cni-622440", held for 4.522453444s
	I1205 07:45:30.045767  299667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-622440
	I1205 07:45:30.065623  299667 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:30.065678  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.065694  299667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:30.065761  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:30.087940  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.099183  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:30.281502  299667 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:30.288110  299667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:30.292481  299667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:30.292550  299667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:30.300562  299667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:30.300584  299667 start.go:496] detecting cgroup driver to use...
	I1205 07:45:30.300616  299667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:30.300666  299667 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 07:45:30.318364  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 07:45:30.332088  299667 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:30.332151  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:30.348258  299667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:30.361775  299667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:30.469361  299667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:30.577441  299667 docker.go:234] disabling docker service ...
	I1205 07:45:30.577508  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:30.592915  299667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:30.607578  299667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:30.752107  299667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:30.872747  299667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:30.888408  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:30.904134  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 07:45:30.914385  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 07:45:30.923315  299667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 07:45:30.923423  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 07:45:30.932175  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.940943  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 07:45:30.949729  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 07:45:30.958228  299667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:30.965941  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 07:45:30.980042  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 07:45:30.995740  299667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 07:45:31.009747  299667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:31.019595  299667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:31.028525  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.153254  299667 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 07:45:31.252043  299667 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 07:45:31.252123  299667 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 07:45:31.255724  299667 start.go:564] Will wait 60s for crictl version
	I1205 07:45:31.255784  299667 ssh_runner.go:195] Run: which crictl
	I1205 07:45:31.259402  299667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:31.288033  299667 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 07:45:31.288102  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.310723  299667 ssh_runner.go:195] Run: containerd --version
	I1205 07:45:31.334839  299667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1205 07:45:31.337671  299667 cli_runner.go:164] Run: docker network inspect newest-cni-622440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:31.359874  299667 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:31.365663  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:45:31.387524  299667 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1205 07:45:31.390412  299667 kubeadm.go:884] updating cluster {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:31.390547  299667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1205 07:45:31.390648  299667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:31.429142  299667 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 07:45:31.429206  299667 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:31.429215  299667 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1205 07:45:31.429338  299667 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-622440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:31.429419  299667 ssh_runner.go:195] Run: sudo crictl info
	I1205 07:45:31.463460  299667 cni.go:84] Creating CNI manager for ""
	I1205 07:45:31.463487  299667 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 07:45:31.463511  299667 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1205 07:45:31.463580  299667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-622440 NodeName:newest-cni-622440 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:31.463714  299667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-622440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
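
The kubeadm.yaml generated above is a single stream of four YAML documents separated by --- markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A toy Go reader for a stream of that shape follows, using only the standard library; a real consumer would hand each document to a YAML decoder instead of scanning for "kind:".

package main

import (
	"fmt"
	"strings"
)

// Split a multi-document YAML stream on its "---" separators and report
// each document's kind, the same layout as the kubeadm.yaml logged above.
func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
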
	
	I1205 07:45:31.463789  299667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:45:31.471606  299667 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:31.471702  299667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:31.480080  299667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 07:45:31.492950  299667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:45:31.505530  299667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1205 07:45:31.518323  299667 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:31.521961  299667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
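
The one-liner above keeps the control-plane hosts entry idempotent: grep -v drops any line already ending in the hostname, echo appends a fresh "IP<tab>name" mapping, and the result is staged under /tmp before sudo cp installs it over /etc/hosts. The same transformation on an in-memory copy is sketched below in Go; ensureHostsEntry is a hypothetical name, not a minikube function.

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry reproduces the shell one-liner above: drop any line
// already ending in "<tab>name", then append "ip<tab>name", so repeated
// runs leave exactly one entry for the hostname.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.85.2", "control-plane.minikube.internal"))
}
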
	I1205 07:45:31.531618  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:31.655593  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:31.673339  299667 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440 for IP: 192.168.85.2
	I1205 07:45:31.673398  299667 certs.go:195] generating shared ca certs ...
	I1205 07:45:31.673427  299667 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:31.673592  299667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 07:45:31.673665  299667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 07:45:31.673695  299667 certs.go:257] generating profile certs ...
	I1205 07:45:31.673812  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/client.key
	I1205 07:45:31.673907  299667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key.6d55f1d8
	I1205 07:45:31.673970  299667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key
	I1205 07:45:31.674103  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 07:45:31.674164  299667 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:31.674197  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:31.674246  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:45:31.674289  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:31.674341  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:31.674413  299667 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 07:45:31.675038  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:31.699874  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:31.718981  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:31.739011  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:31.757897  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 07:45:31.776123  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:31.794286  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:31.815714  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/newest-cni-622440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:45:31.832875  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:31.851417  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 07:45:31.868401  299667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 07:45:31.885858  299667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:31.898468  299667 ssh_runner.go:195] Run: openssl version
	I1205 07:45:31.904594  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.911851  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 07:45:31.919124  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922684  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.922758  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 07:45:31.963682  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:31.970739  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.977808  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:31.985046  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988699  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:31.988790  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:32.029966  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:32.037736  299667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.045196  299667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 07:45:32.052663  299667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056573  299667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.056689  299667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 07:45:32.097976  299667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:32.106452  299667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:32.110712  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:32.154012  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:32.194946  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:32.235499  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:32.276192  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:32.316778  299667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
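
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. A rough stdlib-only Go equivalent is sketched below; the argument handling and output strings are chosen to mimic openssl and are not taken from minikube.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Approximate `openssl x509 -noout -in <cert.pem> -checkend 86400`:
// exit 1 when the certificate's NotAfter falls within the next 24 hours.
func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
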
	I1205 07:45:32.357969  299667 kubeadm.go:401] StartCluster: {Name:newest-cni-622440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-622440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:32.358063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:32.358128  299667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:32.393923  299667 cri.go:89] found id: ""
	I1205 07:45:32.393993  299667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:32.401825  299667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:32.401893  299667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:32.401977  299667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:32.409190  299667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:32.409869  299667 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-622440" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.410186  299667 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-2385/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-622440" cluster setting kubeconfig missing "newest-cni-622440" context setting]
	I1205 07:45:32.410754  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.412652  299667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:32.420082  299667 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:32.420112  299667 kubeadm.go:602] duration metric: took 18.200733ms to restartPrimaryControlPlane
	I1205 07:45:32.420122  299667 kubeadm.go:403] duration metric: took 62.162615ms to StartCluster
	I1205 07:45:32.420136  299667 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.420193  299667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:45:32.421089  299667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:32.421340  299667 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 07:45:32.421617  299667 config.go:182] Loaded profile config "newest-cni-622440": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:32.421690  299667 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:32.421796  299667 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-622440"
	I1205 07:45:32.421816  299667 addons.go:70] Setting default-storageclass=true in profile "newest-cni-622440"
	I1205 07:45:32.421860  299667 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-622440"
	I1205 07:45:32.421826  299667 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-622440"
	I1205 07:45:32.421949  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.422169  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.422375  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.421807  299667 addons.go:70] Setting dashboard=true in profile "newest-cni-622440"
	I1205 07:45:32.422859  299667 addons.go:239] Setting addon dashboard=true in "newest-cni-622440"
	W1205 07:45:32.422869  299667 addons.go:248] addon dashboard should already be in state true
	I1205 07:45:32.422895  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.423306  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.425911  299667 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:32.429270  299667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:32.459552  299667 addons.go:239] Setting addon default-storageclass=true in "newest-cni-622440"
	I1205 07:45:32.459590  299667 host.go:66] Checking if "newest-cni-622440" exists ...
	I1205 07:45:32.459994  299667 cli_runner.go:164] Run: docker container inspect newest-cni-622440 --format={{.State.Status}}
	I1205 07:45:32.466676  299667 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:45:32.469573  299667 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 07:45:32.469693  299667 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.469710  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:45:32.469779  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
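
The -f argument to docker container inspect above is a Go text/template evaluated against the container's inspect document; this one digs out the host port bound to 22/tcp, which is how the sshutil lines below arrive at port 33103. A self-contained sketch that runs the same template expression over a hand-built stand-in structure:

package main

import (
	"os"
	"text/template"
)

// Minimal stand-ins for the fields of docker's inspect JSON that the
// template touches; the real document has many more fields.
type portBinding struct{ HostPort string }

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var doc inspect
	doc.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostPort: "33103"}},
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, doc); err != nil { // prints 33103
		os.Exit(1)
	}
}
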
	I1205 07:45:32.479022  299667 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1205 07:45:30.602600  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:30.967025  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:31.052948  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.052985  297527 retry.go:31] will retry after 7.944795354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.285879  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:31.386500  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.386531  297527 retry.go:31] will retry after 6.357223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.754709  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:31.845913  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:31.845950  297527 retry.go:31] will retry after 12.860014736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
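
These applies fail because kubectl's client-side validation first downloads the OpenAPI schema from the API server, which is still coming back up, so the dial to localhost:8443 is refused; rather than pass --validate=false as the error suggests, minikube simply retries each apply after a jittered delay, as the retry.go lines show. A hand-rolled sketch of that retry shape follows; the attempt count and backoff constants are illustrative, not minikube's.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered, roughly doubling interval between tries -- the same shape as
// the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			// Stand-in for the kubectl apply failure seen above.
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
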
	W1205 07:45:33.103254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:32.484603  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 07:45:32.484629  299667 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 07:45:32.484694  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.517396  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.529599  299667 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.529620  299667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:45:32.529685  299667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-622440
	I1205 07:45:32.549325  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.574838  299667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/newest-cni-622440/id_rsa Username:docker}
	I1205 07:45:32.643911  299667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:32.670090  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:32.687313  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 07:45:32.687343  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 07:45:32.721498  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 07:45:32.721518  299667 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 07:45:32.728026  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:32.759870  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 07:45:32.759892  299667 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1205 07:45:32.773100  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 07:45:32.773119  299667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 07:45:32.790813  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 07:45:32.790887  299667 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 07:45:32.806943  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 07:45:32.807008  299667 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 07:45:32.827525  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 07:45:32.827547  299667 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 07:45:32.840144  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 07:45:32.840166  299667 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 07:45:32.856122  299667 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:32.856196  299667 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 07:45:32.869771  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:33.097468  299667 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:33.097593  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
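
API-server readiness here is probed by polling pgrep -xnf (exact, newest, full-command-line match) for the kube-apiserver process until it appears. A sketch of the same poll loop with a deadline, assuming pgrep on the local PATH; in minikube the command runs over SSH inside the node instead.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` (the same probe as the log
// line above) until it exits zero or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}
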
	W1205 07:45:33.097728  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097794  299667 retry.go:31] will retry after 241.658936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.097872  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.097907  299667 retry.go:31] will retry after 176.603947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.098118  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.098157  299667 retry.go:31] will retry after 229.408257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.275635  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:45:33.328106  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.333654  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.333699  299667 retry.go:31] will retry after 493.072495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.339842  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:33.420976  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421140  299667 retry.go:31] will retry after 232.443098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.421103  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.421275  299667 retry.go:31] will retry after 218.243264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.598377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:33.640183  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:33.654611  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:33.714507  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.714586  299667 retry.go:31] will retry after 296.021108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:33.735889  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.735929  299667 retry.go:31] will retry after 647.569018ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.827334  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:33.912321  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:33.912410  299667 retry.go:31] will retry after 511.925432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.011792  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:34.070223  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.070270  299667 retry.go:31] will retry after 1.045041767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
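
The retry.go delays above (218ms, 296ms, 647ms, 1.045s, ...) grow roughly geometrically with jitter. A minimal sketch of that retry-with-backoff pattern, using a hypothetical retryWithBackoff helper rather than minikube's actual implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff is a hypothetical helper mirroring the log's pattern:
    // run op, and on failure sleep a jittered, roughly doubling delay.
    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
    		return errors.New("apply failed: connection refused") // stand-in for the kubectl apply
    	})
    	fmt.Println("gave up:", err)
    }

The jitter matters here: several addon appliers (storage-provisioner, storageclass, dashboard) are retrying in parallel in this log, and jittered delays keep them from hammering the apiserver in lockstep.
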
	I1205 07:45:34.098366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:34.384609  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:45:34.425097  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:34.456662  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.456771  299667 retry.go:31] will retry after 1.012360732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:34.490780  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.490815  299667 retry.go:31] will retry after 673.94662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:34.598028  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
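
Interleaved with the applies, the runner polls roughly once a second for a live kube-apiserver process (the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines). A hypothetical sketch of that poll loop, shelling out the same way the ssh_runner lines show (sudo omitted):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 5; i++ {
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err != nil {
    			fmt.Println("kube-apiserver not running yet") // pgrep exits 1 on no match
    		} else {
    			fmt.Printf("kube-apiserver pid: %s", out)
    		}
    		time.Sleep(time.Second)
    	}
    }
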
	W1205 07:45:35.602346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:37.602757  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:37.744028  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.809224  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.809268  297527 retry.go:31] will retry after 8.525278844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.998921  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:39.069453  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.069501  297527 retry.go:31] will retry after 21.498999078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.097803  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:35.115652  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:35.165241  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:35.189445  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.189528  299667 retry.go:31] will retry after 873.335351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:35.234071  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.234107  299667 retry.go:31] will retry after 1.250813401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.469343  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:35.535355  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.535386  299667 retry.go:31] will retry after 1.457971594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:35.598793  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.063166  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:45:36.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:36.141912  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.141992  299667 retry.go:31] will retry after 1.289648417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.485696  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:36.544841  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.544879  299667 retry.go:31] will retry after 2.662984572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:36.598226  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:36.993607  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:37.063691  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.063774  299667 retry.go:31] will retry after 1.151172803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.098032  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:37.431865  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:37.492142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.492177  299667 retry.go:31] will retry after 3.504601193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:37.598357  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.098363  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:38.215346  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:38.274274  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.274309  299667 retry.go:31] will retry after 1.757329115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:38.597749  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.097719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:39.208847  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:39.266142  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:39.266182  299667 retry.go:31] will retry after 3.436463849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
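
Every validation failure above has the same root cause: kube-apiserver is not yet listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema it validates manifests against (hence the suggestion to pass --validate=false). A minimal sketch of waiting on the apiserver's /readyz endpoint before applying, assuming a self-signed serving certificate (hence InsecureSkipVerify); this is not minikube's actual readiness check:

// Illustrative only: poll https://localhost:8443/readyz until the
// apiserver answers, instead of letting kubectl fail validation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumed self-signed cert on the minikube apiserver.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8443/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll cadence in this log
	}
	fmt.Println("timed out waiting for apiserver")
}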
	I1205 07:45:39.598395  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.031973  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:40.102833  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:42.602360  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:44.706625  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:40.092374  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.092409  299667 retry.go:31] will retry after 2.182976597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:40.098469  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.598422  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:40.997583  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:41.059423  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.059455  299667 retry.go:31] will retry after 3.560419221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:41.098613  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:41.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.098453  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.276211  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:42.351488  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.351524  299667 retry.go:31] will retry after 9.602308898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.598167  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:42.703420  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:42.760290  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:42.760322  299667 retry.go:31] will retry after 5.381602643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:43.097810  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:43.597706  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.098335  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.597780  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:44.620405  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:44.677458  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.677489  299667 retry.go:31] will retry after 4.279612118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:44.764830  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:44.764865  297527 retry.go:31] will retry after 17.369945393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:45.102956  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:46.334817  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:46.418483  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:46.418521  297527 retry.go:31] will retry after 23.303020683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:47.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:49.602799  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
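
The node_ready.go warnings interleaved here belong to the parallel no-preload-241270 run (pid 297527), which polls the node's Ready condition through the apiserver at 192.168.76.2:8443. A minimal client-go sketch of such a poll, assuming a kubeconfig at the default home path; this is an illustration, not minikube's node_ready.go:

// Illustrative only: retry fetching a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-241270", metav1.GetOptions{})
		if err != nil {
			// Mirrors the log: on connection refused, warn and keep retrying.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}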
	I1205 07:45:45.098273  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:45.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.098640  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:46.597868  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.097740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:47.597768  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.097748  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.142199  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:48.202751  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.202784  299667 retry.go:31] will retry after 9.130347643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:48.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:48.958075  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:49.020580  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.020664  299667 retry.go:31] will retry after 5.816091686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:49.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:49.597778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:45:52.102357  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:54.603289  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:50.097903  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:50.598277  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.098323  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.598320  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:51.954438  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:45:52.018482  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.018522  299667 retry.go:31] will retry after 11.887626777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:52.098608  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:52.598374  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.098377  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:53.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.098330  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.597906  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:54.837992  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:45:54.928421  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:54.928451  299667 retry.go:31] will retry after 21.232814528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:45:57.103152  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:45:59.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:45:55.097998  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:55.598566  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.098233  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:56.598487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:57.333368  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:45:57.391373  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.391409  299667 retry.go:31] will retry after 6.534046571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:45:57.598447  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.098487  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:58.597673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.098584  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:59.597752  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.568740  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:00.647111  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:00.647143  297527 retry.go:31] will retry after 19.124891194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:01.602386  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:02.135738  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:02.196508  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:02.196541  297527 retry.go:31] will retry after 23.234297555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
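Meanwhile the second process (297527, the no-preload-241270 start) keeps failing its node "Ready" check against https://192.168.76.2:8443 with the same connection-refused error. The condition it is polling can be inspected by hand with kubectl's JSONPath support; the node name and kubeconfig path below are taken from the log lines:

    # Prints "True" once the node reports Ready; paths/names as seen in the log.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'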
	I1205 07:46:00.111473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:00.597738  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.097860  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:01.597786  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:02.598349  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.097778  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:03.906517  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:03.926085  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:03.977088  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:03.977126  299667 retry.go:31] will retry after 8.615984736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:04.014857  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.014953  299667 retry.go:31] will retry after 11.096851447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:04.098074  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:04.598727  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:06.103226  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:08.602282  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:09.722604  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:05.098302  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:05.598378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.098313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:06.598356  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.098365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:07.597739  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.098396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:08.597740  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.098581  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:09.598396  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:09.788810  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:09.788894  297527 retry.go:31] will retry after 37.030083188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:10.602342  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:13.102826  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:10.098145  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:10.597796  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.097819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:11.598431  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.098421  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:12.593706  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:12.598498  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:12.687257  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:12.687290  299667 retry.go:31] will retry after 19.919210015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:13.098633  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:13.598345  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.097716  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:14.598398  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:15.602302  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:17.603239  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:15.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:15.112618  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:15.170666  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.170700  299667 retry.go:31] will retry after 26.586504873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:15.598228  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.097761  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:16.161584  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:16.224162  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.224193  299667 retry.go:31] will retry after 29.423350117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:16.597722  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.097721  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:17.597743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.098656  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:18.598271  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.098404  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.598719  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:19.772903  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:19.832639  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:19.832668  297527 retry.go:31] will retry after 32.800355392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:20.103191  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:22.602639  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:24.603138  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:20.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:20.597725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.097770  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:21.598319  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.098284  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:22.598325  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.097718  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:23.597745  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.098368  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:24.598400  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.431569  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:25.488990  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:25.489023  297527 retry.go:31] will retry after 28.819883279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:27.102333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:29.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:25.098708  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:25.597766  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.098393  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:26.598238  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.098573  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:27.598365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.098362  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:28.598524  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.097726  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:29.598366  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:46:31.103394  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:33.602924  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:30.098021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:30.598337  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.098378  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:31.597777  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.097725  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:32.597622  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:32.597702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:32.607176  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 07:46:32.654366  299667 cri.go:89] found id: ""
	I1205 07:46:32.654387  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.654395  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:32.654402  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:32.654460  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:46:32.707430  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707464  299667 retry.go:31] will retry after 35.686554771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:32.707503  299667 cri.go:89] found id: ""
	I1205 07:46:32.707512  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.707519  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:32.707525  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:32.707583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:32.732319  299667 cri.go:89] found id: ""
	I1205 07:46:32.732341  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.732350  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:32.732356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:32.732414  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:32.756204  299667 cri.go:89] found id: ""
	I1205 07:46:32.756226  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.756235  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:32.756241  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:32.756313  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:32.785401  299667 cri.go:89] found id: ""
	I1205 07:46:32.785423  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.785431  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:32.785437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:32.785493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:32.811348  299667 cri.go:89] found id: ""
	I1205 07:46:32.811373  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.811381  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:32.811388  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:32.811461  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:32.835578  299667 cri.go:89] found id: ""
	I1205 07:46:32.835603  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.835612  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:32.835618  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:32.835679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:32.861749  299667 cri.go:89] found id: ""
	I1205 07:46:32.861773  299667 logs.go:282] 0 containers: []
	W1205 07:46:32.861781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
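At this point process 299667 switches to diagnostics: it enumerates CRI containers component by component via crictl and finds none, confirming that no control-plane container ever started under containerd. The per-component checks above can be reproduced in a single pass; the component names and the crictl invocation are taken directly from the log:

    # Mirrors the per-component container check above; names come from the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done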
	I1205 07:46:32.861790  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:32.861801  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:32.937533  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:32.929418    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.930083    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.931826    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.932339    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:32.933944    1828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
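	Every kubectl attempt in this window dies the same way: dial tcp [::1]:8443 is refused because no apiserver container exists to listen there, consistent with the empty crictl probes above. As a hedged illustration, a plain TCP probe separates "nothing listening" from "slow to answer"; the address and timeout below are assumptions, not values taken from minikube:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe reports whether anything accepts TCP connections on addr.
	// "connect: connection refused", as seen in the kubectl errors above,
	// means the host is reachable but nothing is listening on the port.
	func probe(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probe("localhost:8443", 2*time.Second); err != nil {
			fmt.Println("apiserver not reachable:", err)
		} else {
			fmt.Println("apiserver port is open")
		}
	}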
	I1205 07:46:32.937555  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:32.937568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:32.962127  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:32.962161  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:32.989223  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:32.989256  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:33.046092  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:33.046128  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:46:36.102426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:38.602828  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
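	Interleaved with the 299667 stream, process 297527 is polling the Ready condition of node no-preload-241270 and hitting the same refused connection. A sketch of that kind of poll with client-go follows; the kubeconfig path, retry interval, and function name are assumptions rather than minikube's actual node_ready implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports its Ready condition; callers
	// retry on error, as the node_ready.go loop in this log does.
	func nodeReady(client *kubernetes.Clientset, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver is down
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			ready, err := nodeReady(client, "no-preload-241270")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2500 * time.Millisecond) // the log shows roughly 2.5s between attempts
		}
	}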
	I1205 07:46:35.559882  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:35.570602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:35.570679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:35.597322  299667 cri.go:89] found id: ""
	I1205 07:46:35.597348  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.597358  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:35.597364  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:35.597420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:35.631556  299667 cri.go:89] found id: ""
	I1205 07:46:35.631585  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.631594  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:35.631605  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:35.631670  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:35.666766  299667 cri.go:89] found id: ""
	I1205 07:46:35.666790  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.666808  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:35.666851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:35.666928  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:35.696469  299667 cri.go:89] found id: ""
	I1205 07:46:35.696494  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.696503  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:35.696510  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:35.696570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:35.721564  299667 cri.go:89] found id: ""
	I1205 07:46:35.721587  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.721613  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:35.721620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:35.721679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:35.750450  299667 cri.go:89] found id: ""
	I1205 07:46:35.750474  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.750483  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:35.750490  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:35.750577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:35.779075  299667 cri.go:89] found id: ""
	I1205 07:46:35.779097  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.779105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:35.779111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:35.779171  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:35.804778  299667 cri.go:89] found id: ""
	I1205 07:46:35.804849  299667 logs.go:282] 0 containers: []
	W1205 07:46:35.804870  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:35.804891  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:35.804928  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:35.818664  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:35.818691  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:35.896985  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:35.889732    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.890522    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892074    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.892387    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:35.893905    1945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:35.897010  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:35.897023  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:35.922964  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:35.922997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:35.950985  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:35.951012  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.510773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:38.521214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:38.521283  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:38.547037  299667 cri.go:89] found id: ""
	I1205 07:46:38.547061  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.547069  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:38.547088  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:38.547152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:38.571870  299667 cri.go:89] found id: ""
	I1205 07:46:38.571894  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.571903  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:38.571909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:38.571967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:38.597667  299667 cri.go:89] found id: ""
	I1205 07:46:38.597693  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.597701  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:38.597707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:38.597781  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:38.634302  299667 cri.go:89] found id: ""
	I1205 07:46:38.634328  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.634336  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:38.634343  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:38.634411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:38.662787  299667 cri.go:89] found id: ""
	I1205 07:46:38.662813  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.662822  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:38.662829  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:38.662886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:38.688000  299667 cri.go:89] found id: ""
	I1205 07:46:38.688026  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.688034  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:38.688040  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:38.688108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:38.712589  299667 cri.go:89] found id: ""
	I1205 07:46:38.712611  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.712619  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:38.712631  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:38.712688  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:38.736469  299667 cri.go:89] found id: ""
	I1205 07:46:38.736490  299667 logs.go:282] 0 containers: []
	W1205 07:46:38.736499  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:38.736507  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:38.736521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:38.763556  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:38.763586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:38.818344  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:38.818379  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:38.832020  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:38.832054  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:38.931143  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:38.913874    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.914687    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.925359    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.926090    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:38.927798    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:38.931164  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:38.931178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:40.603153  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:43.102740  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:41.457376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:41.468655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:41.468729  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:41.496317  299667 cri.go:89] found id: ""
	I1205 07:46:41.496391  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.496415  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:41.496434  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:41.496520  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:41.522205  299667 cri.go:89] found id: ""
	I1205 07:46:41.522230  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.522238  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:41.522244  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:41.522304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:41.547643  299667 cri.go:89] found id: ""
	I1205 07:46:41.547668  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.547677  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:41.547684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:41.547743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:41.576000  299667 cri.go:89] found id: ""
	I1205 07:46:41.576024  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.576032  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:41.576039  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:41.576093  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:41.610347  299667 cri.go:89] found id: ""
	I1205 07:46:41.610373  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.610393  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:41.610399  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:41.610455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:41.641947  299667 cri.go:89] found id: ""
	I1205 07:46:41.641974  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.641983  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:41.641990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:41.642049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:41.680331  299667 cri.go:89] found id: ""
	I1205 07:46:41.680355  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.680363  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:41.680370  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:41.680426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:41.707279  299667 cri.go:89] found id: ""
	I1205 07:46:41.707301  299667 logs.go:282] 0 containers: []
	W1205 07:46:41.707310  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:41.707319  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:41.707331  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:41.720629  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:41.720654  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 07:46:41.757919  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:41.789558  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:41.777918    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.778570    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.783933    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.784533    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:41.786282    2170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:41.789582  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:41.789596  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:41.829441  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:41.829475  299667 retry.go:31] will retry after 23.380573162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
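	The storageclass apply fails client-side because kubectl cannot download the OpenAPI schema from the dead apiserver, and retry.go schedules another attempt after a randomized delay (23.38s here, 32.9s for storage-provisioner below). A minimal sketch of that retry shape; the growth factor, jitter, and attempt cap are assumed, not minikube's constants:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter keeps calling fn, sleeping a growing, randomized
	// interval between failures - the pattern behind the
	// "will retry after 23.380573162s" lines in this log.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base << uint(i)                          // exponential growth (assumed)
			delay += time.Duration(rand.Int63n(int64(delay))) // up to 100% jitter (assumed)
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		_ = retryWithJitter(4, 5*time.Second, func() error {
			return fmt.Errorf("kubectl apply storageclass.yaml: connection refused")
		})
	}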
	I1205 07:46:41.840285  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:41.840316  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:41.875962  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:41.875990  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.439978  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:44.450947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:44.451025  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:44.476311  299667 cri.go:89] found id: ""
	I1205 07:46:44.476335  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.476344  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:44.476350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:44.476420  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:44.501030  299667 cri.go:89] found id: ""
	I1205 07:46:44.501064  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.501073  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:44.501078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:44.501138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:44.525674  299667 cri.go:89] found id: ""
	I1205 07:46:44.525697  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.525705  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:44.525711  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:44.525769  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:44.554878  299667 cri.go:89] found id: ""
	I1205 07:46:44.554903  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.554911  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:44.554918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:44.554991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:44.579773  299667 cri.go:89] found id: ""
	I1205 07:46:44.579796  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.579805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:44.579811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:44.579867  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:44.611991  299667 cri.go:89] found id: ""
	I1205 07:46:44.612017  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.612042  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:44.612049  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:44.612108  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:44.646395  299667 cri.go:89] found id: ""
	I1205 07:46:44.646418  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.646427  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:44.646433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:44.646499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:44.674148  299667 cri.go:89] found id: ""
	I1205 07:46:44.674170  299667 logs.go:282] 0 containers: []
	W1205 07:46:44.674178  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:44.674187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:44.674199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:44.734427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:44.734469  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:44.748531  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:44.748561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:44.815565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:44.808155    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.809633    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.810196    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811231    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:44.811730    2290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:44.815586  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:44.815601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:44.841456  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:44.841492  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:46:45.103537  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:46.819177  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:46:46.909187  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:46.909286  297527 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
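	Each manifest in the dashboard bundle fails the same client-side validation step, and the error text suggests --validate=false; that flag only skips the OpenAPI download, so the apply itself would still need a live apiserver. A hedged sketch of the apply invocation, with paths copied from the log and the helper name ours:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest replays the addon apply from the log. With
	// validate=false kubectl skips fetching /openapi/v2, which is the
	// step failing above; the server round-trip for the apply remains.
	func applyManifest(path string, validate bool) error {
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", fmt.Sprintf("--validate=%t", validate), "-f", path,
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %w\n%s", path, err, out)
		}
		return nil
	}

	func main() {
		if err := applyManifest("/etc/kubernetes/addons/dashboard-ns.yaml", false); err != nil {
			fmt.Println(err)
		}
	}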
	W1205 07:46:47.602297  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:49.602426  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:45.648666  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:45.706769  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:45.706803  299667 retry.go:31] will retry after 32.901994647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 07:46:47.381509  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:47.392949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:47.393065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:47.424033  299667 cri.go:89] found id: ""
	I1205 07:46:47.424057  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.424066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:47.424072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:47.424140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:47.451239  299667 cri.go:89] found id: ""
	I1205 07:46:47.451265  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.451275  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:47.451282  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:47.451342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:47.475229  299667 cri.go:89] found id: ""
	I1205 07:46:47.475250  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.475259  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:47.475265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:47.475322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:47.500010  299667 cri.go:89] found id: ""
	I1205 07:46:47.500036  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.500045  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:47.500051  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:47.500110  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:47.525665  299667 cri.go:89] found id: ""
	I1205 07:46:47.525691  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.525700  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:47.525707  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:47.525767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:47.550876  299667 cri.go:89] found id: ""
	I1205 07:46:47.550902  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.550911  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:47.550917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:47.550978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:47.574838  299667 cri.go:89] found id: ""
	I1205 07:46:47.574904  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.574926  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:47.574940  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:47.575018  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:47.606672  299667 cri.go:89] found id: ""
	I1205 07:46:47.606698  299667 logs.go:282] 0 containers: []
	W1205 07:46:47.606707  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:47.606716  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:47.606728  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:47.644360  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:47.644388  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:47.706982  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:47.707019  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:47.720731  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:47.720759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:47.782357  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:47.775020    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.775660    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777315    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.777837    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:47.779353    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:46:47.782378  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:47.782393  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:46:51.603232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:52.633653  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:46:52.692000  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:52.692106  297527 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1205 07:46:54.102683  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:46:54.310076  297527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1205 07:46:54.372261  297527 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:46:54.372370  297527 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:46:54.375412  297527 out.go:179] * Enabled addons: 
	I1205 07:46:54.378282  297527 addons.go:530] duration metric: took 1m42.739564939s for enable addons: enabled=[]
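	
The "duration metric" line records elapsed wall-clock time for the whole addon phase, and the empty enabled=[] list confirms that every addon callback failed. The format matches Go's time.Duration, presumably produced with time.Since; a minimal sketch of the pattern:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{}              // no addon callback succeeded in this run
        time.Sleep(120 * time.Millisecond) // stand-in for the enable work
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }
	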
	I1205 07:46:50.307630  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:50.318086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:50.318159  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:50.342816  299667 cri.go:89] found id: ""
	I1205 07:46:50.342838  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.342847  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:50.342853  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:50.342921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:50.371375  299667 cri.go:89] found id: ""
	I1205 07:46:50.371440  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.371462  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:50.371478  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:50.371566  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:50.401098  299667 cri.go:89] found id: ""
	I1205 07:46:50.401206  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.401224  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:50.401245  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:50.401310  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:50.432101  299667 cri.go:89] found id: ""
	I1205 07:46:50.432134  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.432143  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:50.432149  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:50.432262  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:50.457371  299667 cri.go:89] found id: ""
	I1205 07:46:50.457396  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.457405  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:50.457413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:50.457469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:50.486796  299667 cri.go:89] found id: ""
	I1205 07:46:50.486821  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.486830  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:50.486836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:50.486945  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:50.515505  299667 cri.go:89] found id: ""
	I1205 07:46:50.515529  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.515537  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:50.515544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:50.515606  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:50.543462  299667 cri.go:89] found id: ""
	I1205 07:46:50.543486  299667 logs.go:282] 0 containers: []
	W1205 07:46:50.543495  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
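	
Each gather cycle above first checks for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then walks a fixed list of control-plane component names and asks the CRI for matching containers via sudo crictl ps -a --quiet --name=<component>. The --quiet flag prints only container IDs, so an empty result is what produces the found id: "" and 0 containers lines. A sketch of that listing loop under the same assumptions (crictl on PATH, containerd socket configured):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (any state)
    // whose name matches the given component, using crictl --quiet.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("crictl failed:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }
	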
	I1205 07:46:50.543503  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:50.543561  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:50.600091  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:50.600276  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:50.619872  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:50.619944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:50.690141  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:50.682244    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.682926    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.684648    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.685004    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:50.686442    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
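	
The "describe nodes" gather shells the bundled kubectl directly with --kubeconfig rather than the KUBECONFIG environment variable; with no apiserver it exits with status 1, and the harness records both the error string and the captured stderr block, which is why the same stderr appears twice in each failure above. A sketch of that invocation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
        out, err := cmd.CombinedOutput()
        if err != nil {
            // Exit status 1 with "connection refused" on stderr, as in the log.
            fmt.Printf("failed describe nodes: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }
	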
	I1205 07:46:50.690160  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:50.690173  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:50.715362  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:50.715398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:53.244467  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:53.256174  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:53.256240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:53.279782  299667 cri.go:89] found id: ""
	I1205 07:46:53.279803  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.279810  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:53.279817  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:53.279878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:53.303793  299667 cri.go:89] found id: ""
	I1205 07:46:53.303813  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.303821  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:53.303827  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:53.303884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:53.332886  299667 cri.go:89] found id: ""
	I1205 07:46:53.332908  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.332916  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:53.332922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:53.332981  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:53.359130  299667 cri.go:89] found id: ""
	I1205 07:46:53.359153  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.359161  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:53.359168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:53.359229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:53.384922  299667 cri.go:89] found id: ""
	I1205 07:46:53.384947  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.384966  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:53.384972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:53.385033  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:53.409882  299667 cri.go:89] found id: ""
	I1205 07:46:53.409903  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.409912  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:53.409918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:53.409982  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:53.435229  299667 cri.go:89] found id: ""
	I1205 07:46:53.435254  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.435263  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:53.435269  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:53.435326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:53.460378  299667 cri.go:89] found id: ""
	I1205 07:46:53.460402  299667 logs.go:282] 0 containers: []
	W1205 07:46:53.460411  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:53.460419  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:53.460430  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:53.515653  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:53.515686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:53.529252  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:53.529277  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:53.590407  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:53.583367    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.583919    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585470    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.585888    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:53.587316    2634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:53.590427  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:53.590439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:53.615638  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:53.615670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
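	
The "container status" gather uses a small shell fallback chain: `which crictl || echo crictl` substitutes the full crictl path when one is found (or the bare name otherwise), and the trailing || sudo docker ps -a falls back to Docker if the crictl invocation fails entirely, so the same command works on both containerd and Docker runtimes. A sketch of issuing it from Go, with the command string taken verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; fall back to docker if crictl is absent or errors.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
	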
	W1205 07:46:56.102997  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:46:58.602448  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
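	
The interleaved W-lines from process 297527 are a second test (no-preload-241270) polling the node's Ready condition directly against https://192.168.76.2:8443 and retrying on connection refused, at a cadence of roughly 2.5 seconds. A minimal polling sketch under the same assumptions; the endpoint and interval are taken from the log, while the deadline and TLS handling are simplified for illustration (the real check also parses the Ready condition out of the returned JSON):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270"
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("error getting node (will retry):", err)
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver answered:", resp.Status)
            return
        }
        fmt.Println("gave up waiting for node Ready")
    }
	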
	I1205 07:46:56.149491  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:56.160491  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:56.160560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:56.186032  299667 cri.go:89] found id: ""
	I1205 07:46:56.186055  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.186063  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:56.186069  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:56.186127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:56.210655  299667 cri.go:89] found id: ""
	I1205 07:46:56.210683  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.210691  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:56.210698  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:56.210760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:56.236968  299667 cri.go:89] found id: ""
	I1205 07:46:56.237039  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.237060  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:56.237078  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:56.237197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:56.261470  299667 cri.go:89] found id: ""
	I1205 07:46:56.261543  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.261559  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:56.261567  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:56.261626  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:56.287544  299667 cri.go:89] found id: ""
	I1205 07:46:56.287569  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.287578  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:56.287586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:56.287664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:56.313083  299667 cri.go:89] found id: ""
	I1205 07:46:56.313154  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.313200  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:56.313222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:56.313290  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:56.338841  299667 cri.go:89] found id: ""
	I1205 07:46:56.338865  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.338879  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:56.338886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:56.338971  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:56.364821  299667 cri.go:89] found id: ""
	I1205 07:46:56.364883  299667 logs.go:282] 0 containers: []
	W1205 07:46:56.364906  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:56.364927  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:56.364953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:56.421380  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:56.421412  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:56.434797  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:56.434825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:56.500557  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:56.493253    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.493860    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495552    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.495893    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:56.497464    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:56.500579  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:56.500592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:56.525423  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:56.525453  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.059925  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:46:59.070350  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:59.070417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:59.106211  299667 cri.go:89] found id: ""
	I1205 07:46:59.106234  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.106242  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:59.106250  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:46:59.106308  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:59.134075  299667 cri.go:89] found id: ""
	I1205 07:46:59.134101  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.134110  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:46:59.134116  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:46:59.134173  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:59.163091  299667 cri.go:89] found id: ""
	I1205 07:46:59.163119  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.163128  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:46:59.163134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:59.163195  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:59.189283  299667 cri.go:89] found id: ""
	I1205 07:46:59.189308  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.189316  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:59.189323  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:59.189384  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:59.214391  299667 cri.go:89] found id: ""
	I1205 07:46:59.214416  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.214433  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:59.214439  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:59.214498  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:59.246223  299667 cri.go:89] found id: ""
	I1205 07:46:59.246246  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.246255  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:59.246262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:59.246321  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:59.274955  299667 cri.go:89] found id: ""
	I1205 07:46:59.274991  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.274999  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:59.275006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:46:59.275074  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:46:59.302932  299667 cri.go:89] found id: ""
	I1205 07:46:59.302956  299667 logs.go:282] 0 containers: []
	W1205 07:46:59.302965  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:46:59.302984  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:59.302997  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:59.362548  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:46:59.355378    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.356168    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357705    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.357979    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:46:59.359438    2851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:59.362571  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:46:59.362583  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:46:59.387053  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:46:59.387085  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:59.413739  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:59.413767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:59.469532  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:59.469569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
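	
Each failed cycle ends by collecting the same log sources over SSH: journalctl scoped to one systemd unit and capped at the last 400 lines, and dmesg restricted to warning-and-above kernel messages (-P disables the pager, -H selects human-readable output, -L=never disables color codes so the capture stays plain text). A sketch that bundles them the way the gatherer does, with the flags annotated; the commands are verbatim from the log, the map structure is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gather := map[string]string{
            // Last 400 lines of each systemd unit's journal.
            "kubelet":    `sudo journalctl -u kubelet -n 400`,
            "containerd": `sudo journalctl -u containerd -n 400`,
            // -P: no pager, -H: human-readable, -L=never: no color codes,
            // --level: only warn and more severe kernel messages.
            "dmesg": `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
        }
        for name, script := range gather {
            out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("==> %s (%d bytes)\n", name, len(out))
        }
    }
	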
	W1205 07:47:00.602658  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:03.102385  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:01.983455  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:01.994190  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:01.994316  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:02.023883  299667 cri.go:89] found id: ""
	I1205 07:47:02.023913  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.023922  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:02.023929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:02.023992  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:02.050293  299667 cri.go:89] found id: ""
	I1205 07:47:02.050367  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.050383  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:02.050390  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:02.050458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:02.076131  299667 cri.go:89] found id: ""
	I1205 07:47:02.076157  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.076166  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:02.076172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:02.076235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:02.115590  299667 cri.go:89] found id: ""
	I1205 07:47:02.115623  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.115632  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:02.115638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:02.115733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:02.155255  299667 cri.go:89] found id: ""
	I1205 07:47:02.155281  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.155290  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:02.155297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:02.155355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:02.184142  299667 cri.go:89] found id: ""
	I1205 07:47:02.184169  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.184178  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:02.184185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:02.184244  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:02.208969  299667 cri.go:89] found id: ""
	I1205 07:47:02.208997  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.209006  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:02.209036  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:02.209126  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:02.233523  299667 cri.go:89] found id: ""
	I1205 07:47:02.233556  299667 logs.go:282] 0 containers: []
	W1205 07:47:02.233565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:02.233597  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:02.233609  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:02.289818  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:02.289852  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:02.303686  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:02.303756  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:02.370663  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:02.363198    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.363884    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.365534    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.366033    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:02.367548    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:02.370711  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:02.370723  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:02.395466  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:02.395508  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:04.925546  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:04.937771  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:04.937866  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:04.967009  299667 cri.go:89] found id: ""
	I1205 07:47:04.967031  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.967039  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:04.967046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:04.967103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:04.998327  299667 cri.go:89] found id: ""
	I1205 07:47:04.998351  299667 logs.go:282] 0 containers: []
	W1205 07:47:04.998360  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:04.998365  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:04.998426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:05.026478  299667 cri.go:89] found id: ""
	I1205 07:47:05.026505  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.026513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:05.026521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:05.026583  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:05.051556  299667 cri.go:89] found id: ""
	I1205 07:47:05.051580  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.051588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:05.051595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:05.051658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:05.078546  299667 cri.go:89] found id: ""
	I1205 07:47:05.078570  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.078579  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:05.078585  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:05.078649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1205 07:47:05.102744  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:07.602359  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:09.603452  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:05.107928  299667 cri.go:89] found id: ""
	I1205 07:47:05.107955  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.107964  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:05.107971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:05.108035  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:05.134695  299667 cri.go:89] found id: ""
	I1205 07:47:05.134718  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.134727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:05.134733  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:05.134792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:05.160991  299667 cri.go:89] found id: ""
	I1205 07:47:05.161017  299667 logs.go:282] 0 containers: []
	W1205 07:47:05.161025  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:05.161035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:05.161048  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:05.211053  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:47:05.219354  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:05.219426  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:47:05.274067  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:05.274165  299667 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:05.274831  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:05.274851  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:05.336443  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:05.329296    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.329874    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331388    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.331708    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:05.333184    3091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:05.336473  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:05.336486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:05.361343  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:05.361374  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:07.887800  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:07.899185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:07.899259  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:07.927401  299667 cri.go:89] found id: ""
	I1205 07:47:07.927423  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.927431  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:07.927437  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:07.927511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:07.958986  299667 cri.go:89] found id: ""
	I1205 07:47:07.959008  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.959017  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:07.959023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:07.959081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:07.986953  299667 cri.go:89] found id: ""
	I1205 07:47:07.986974  299667 logs.go:282] 0 containers: []
	W1205 07:47:07.986983  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:07.986989  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:07.987052  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:08.013548  299667 cri.go:89] found id: ""
	I1205 07:47:08.013573  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.013581  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:08.013590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:08.013654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:08.039626  299667 cri.go:89] found id: ""
	I1205 07:47:08.039650  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.039658  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:08.039664  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:08.039724  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:08.064448  299667 cri.go:89] found id: ""
	I1205 07:47:08.064472  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.064482  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:08.064489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:08.064548  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:08.089144  299667 cri.go:89] found id: ""
	I1205 07:47:08.089234  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.089250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:08.089257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:08.089325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:08.124837  299667 cri.go:89] found id: ""
	I1205 07:47:08.124863  299667 logs.go:282] 0 containers: []
	W1205 07:47:08.124890  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:08.124900  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:08.124917  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:08.155028  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:08.155055  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:08.215310  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:08.215346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:08.229549  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:08.229577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:08.292266  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:08.284556    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.285203    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.286447    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.287121    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:08.288990    3212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:08.292296  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:08.292309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:08.394608  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1205 07:47:08.457975  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:08.458074  299667 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
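	The ten validation failures above are all one symptom: kubectl cannot download the OpenAPI schema from localhost:8443 because kube-apiserver is not running, so every manifest is rejected client-side before anything reaches the cluster. As a hedged illustration only (not something this test run executes), the error text's own suggestion would look like this when retried by hand inside the node:

	    # Illustrative manual retry of one dashboard manifest; --validate=false
	    # skips the failing OpenAPI download, but the apply still needs a
	    # reachable kube-apiserver to actually succeed.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      apply --force --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml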
	W1205 07:47:12.102433  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:14.102787  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:10.816831  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:10.827471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:10.827537  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:10.856590  299667 cri.go:89] found id: ""
	I1205 07:47:10.856612  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.856621  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:10.856626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:10.856687  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:10.887186  299667 cri.go:89] found id: ""
	I1205 07:47:10.887207  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.887215  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:10.887221  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:10.887279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:10.914460  299667 cri.go:89] found id: ""
	I1205 07:47:10.914482  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.914490  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:10.914497  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:10.914554  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:10.943070  299667 cri.go:89] found id: ""
	I1205 07:47:10.943095  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.943103  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:10.943109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:10.943167  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:10.967007  299667 cri.go:89] found id: ""
	I1205 07:47:10.967034  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.967043  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:10.967050  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:10.967142  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:10.990367  299667 cri.go:89] found id: ""
	I1205 07:47:10.990394  299667 logs.go:282] 0 containers: []
	W1205 07:47:10.990402  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:10.990408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:10.990465  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:11.021515  299667 cri.go:89] found id: ""
	I1205 07:47:11.021538  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.021547  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:11.021553  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:11.021616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:11.046137  299667 cri.go:89] found id: ""
	I1205 07:47:11.046159  299667 logs.go:282] 0 containers: []
	W1205 07:47:11.046168  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:11.046176  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:11.046190  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:11.071756  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:11.071787  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:11.101757  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:11.101784  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:11.175924  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:11.175962  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:11.190392  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:11.190424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:11.252655  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:11.245043    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.245806    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.247514    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.248076    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:11.249669    3334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:13.753819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:13.764287  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:13.764373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:13.790393  299667 cri.go:89] found id: ""
	I1205 07:47:13.790418  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.790426  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:13.790433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:13.790496  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:13.814911  299667 cri.go:89] found id: ""
	I1205 07:47:13.814935  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.814944  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:13.814951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:13.815007  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:13.839756  299667 cri.go:89] found id: ""
	I1205 07:47:13.839779  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.839787  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:13.839794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:13.839852  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:13.870908  299667 cri.go:89] found id: ""
	I1205 07:47:13.870933  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.870943  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:13.870949  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:13.871010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:13.902182  299667 cri.go:89] found id: ""
	I1205 07:47:13.902208  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.902216  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:13.902223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:13.902281  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:13.928077  299667 cri.go:89] found id: ""
	I1205 07:47:13.928102  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.928111  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:13.928117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:13.928174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:13.952673  299667 cri.go:89] found id: ""
	I1205 07:47:13.952706  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.952715  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:13.952721  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:13.952786  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:13.982104  299667 cri.go:89] found id: ""
	I1205 07:47:13.982137  299667 logs.go:282] 0 containers: []
	W1205 07:47:13.982147  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:13.982156  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:13.982168  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:14.047894  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:14.047925  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:14.061830  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:14.061861  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:14.145569  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:14.138019    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.138669    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140225    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.140720    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:14.142187    3434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:14.145587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:14.145601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:14.173369  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:14.173406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
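	Everything from the pgrep at 07:47:13.753819 down to this point is one pass of minikube's log-gathering loop: probe for each expected control-plane container via crictl, find none, then dump kubelet, dmesg, describe-nodes, containerd and container-status output. A minimal sketch of the same per-component probe, assuming a shell inside the node with crictl on PATH (illustrative, not part of the test harness):

	    # Sketch of the probe loop visible in the log above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done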
	I1205 07:47:16.701890  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:16.712471  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:16.712541  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:16.737364  299667 cri.go:89] found id: ""
	I1205 07:47:16.737386  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.737394  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:16.737400  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:16.737458  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:16.761826  299667 cri.go:89] found id: ""
	I1205 07:47:16.761849  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.761858  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:16.761864  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:16.761921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:16.787321  299667 cri.go:89] found id: ""
	I1205 07:47:16.787343  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.787352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:16.787359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:16.787419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:16.812059  299667 cri.go:89] found id: ""
	I1205 07:47:16.812080  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.812087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:16.812094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:16.812152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:16.835710  299667 cri.go:89] found id: ""
	I1205 07:47:16.835731  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.835739  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:16.835745  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:16.835804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:16.866817  299667 cri.go:89] found id: ""
	I1205 07:47:16.866839  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.866848  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:16.866854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:16.866915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:16.892855  299667 cri.go:89] found id: ""
	I1205 07:47:16.892877  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.892885  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:16.892891  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:16.892948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:16.921328  299667 cri.go:89] found id: ""
	I1205 07:47:16.921348  299667 logs.go:282] 0 containers: []
	W1205 07:47:16.921356  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:16.921365  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:16.921378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:16.975810  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:16.975843  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:16.989559  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:16.989589  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:17.052011  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:17.045268    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.045926    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.046967    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.047483    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:17.049021    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:17.052031  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:17.052044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:17.076823  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:17.076853  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:18.609402  299667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1205 07:47:18.686960  299667 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 07:47:18.687059  299667 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 07:47:18.690290  299667 out.go:179] * Enabled addons: 
	W1205 07:47:16.602616  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:19.102330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:18.693172  299667 addons.go:530] duration metric: took 1m46.271465904s for enable addons: enabled=[]
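	enabled=[] records that after 1m46s of retries no addon callback ever succeeded, which matches the empty crictl listings: kube-apiserver never came up, so every request to localhost:8443 is refused. A hedged set of manual checks that would confirm the same state from inside the node (assuming curl is present in the image):

	    # Illustrative health checks mirroring what the log already records.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'
	    sudo crictl ps -a --quiet --name=kube-apiserver        # empty in this run
	    curl -ksS --max-time 5 https://localhost:8443/healthz || echo 'not serving'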
	I1205 07:47:19.612423  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:19.623124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:19.623194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:19.651237  299667 cri.go:89] found id: ""
	I1205 07:47:19.651260  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.651268  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:19.651276  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:19.651338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:19.679760  299667 cri.go:89] found id: ""
	I1205 07:47:19.679781  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.679790  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:19.679795  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:19.679854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:19.703620  299667 cri.go:89] found id: ""
	I1205 07:47:19.703640  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.703652  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:19.703658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:19.703731  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:19.727543  299667 cri.go:89] found id: ""
	I1205 07:47:19.727607  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.727629  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:19.727645  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:19.727736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:19.751580  299667 cri.go:89] found id: ""
	I1205 07:47:19.751606  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.751614  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:19.751620  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:19.751678  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:19.778033  299667 cri.go:89] found id: ""
	I1205 07:47:19.778058  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.778066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:19.778074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:19.778130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:19.805321  299667 cri.go:89] found id: ""
	I1205 07:47:19.805346  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.805354  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:19.805360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:19.805419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:19.828911  299667 cri.go:89] found id: ""
	I1205 07:47:19.828932  299667 logs.go:282] 0 containers: []
	W1205 07:47:19.828940  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:19.828949  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:19.828961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:19.842046  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:19.842072  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:19.924477  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:19.917019    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.917722    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919291    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.919788    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:19.921450    3661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:19.924542  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:19.924568  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:19.949241  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:19.949279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:19.977260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:19.977287  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:47:21.102389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:23.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:22.534572  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:22.545193  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:22.545272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:22.570057  299667 cri.go:89] found id: ""
	I1205 07:47:22.570083  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.570092  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:22.570098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:22.570163  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:22.595296  299667 cri.go:89] found id: ""
	I1205 07:47:22.595321  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.595330  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:22.595337  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:22.595421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:22.620283  299667 cri.go:89] found id: ""
	I1205 07:47:22.620307  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.620315  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:22.620322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:22.620399  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:22.644353  299667 cri.go:89] found id: ""
	I1205 07:47:22.644379  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.644389  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:22.644395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:22.644474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:22.674856  299667 cri.go:89] found id: ""
	I1205 07:47:22.674885  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.674894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:22.674900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:22.674980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:22.699975  299667 cri.go:89] found id: ""
	I1205 07:47:22.700002  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.700011  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:22.700018  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:22.700089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:22.725706  299667 cri.go:89] found id: ""
	I1205 07:47:22.725734  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.725743  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:22.725753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:22.725822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:22.750409  299667 cri.go:89] found id: ""
	I1205 07:47:22.750430  299667 logs.go:282] 0 containers: []
	W1205 07:47:22.750439  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:22.750459  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:22.750471  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:22.775719  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:22.775754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:22.806148  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:22.806175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:22.863750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:22.863786  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:22.878145  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:22.878174  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:22.945284  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:22.937785    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.938400    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940016    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.940571    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:22.942238    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:47:25.602789  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:28.102396  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
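	The interleaved W1205 lines from PID 297527 belong to the parallel no-preload test, which is polling the Ready condition of node no-preload-241270 against 192.168.76.2:8443 and hitting the same refused connection. A hypothetical kubectl equivalent of that poll (not what the harness runs):

	    # Roughly the same wait the node_ready.go retry loop performs.
	    kubectl wait --for=condition=Ready node/no-preload-241270 --timeout=120s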
	I1205 07:47:25.446099  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:25.457267  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:25.457345  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:25.484246  299667 cri.go:89] found id: ""
	I1205 07:47:25.484273  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.484282  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:25.484289  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:25.484346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:25.513783  299667 cri.go:89] found id: ""
	I1205 07:47:25.513806  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.513815  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:25.513821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:25.513895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:25.542603  299667 cri.go:89] found id: ""
	I1205 07:47:25.542627  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.542636  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:25.542642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:25.542768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:25.566393  299667 cri.go:89] found id: ""
	I1205 07:47:25.566417  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.566427  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:25.566433  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:25.566510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:25.591113  299667 cri.go:89] found id: ""
	I1205 07:47:25.591148  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.591157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:25.591164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:25.591237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:25.619895  299667 cri.go:89] found id: ""
	I1205 07:47:25.619919  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.619928  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:25.619935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:25.619991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:25.645287  299667 cri.go:89] found id: ""
	I1205 07:47:25.645311  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.645319  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:25.645326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:25.645386  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:25.670944  299667 cri.go:89] found id: ""
	I1205 07:47:25.670967  299667 logs.go:282] 0 containers: []
	W1205 07:47:25.670975  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:25.671025  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:25.671043  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:25.728687  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:25.728721  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:25.743347  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:25.743373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:25.808046  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:25.800065    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.800704    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.802506    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.803182    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:25.804772    3881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:47:25.808069  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:25.808082  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:25.833265  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:25.833298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:28.366360  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:28.378460  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:28.378539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:28.413651  299667 cri.go:89] found id: ""
	I1205 07:47:28.413678  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.413687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:28.413694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:28.413755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:28.439196  299667 cri.go:89] found id: ""
	I1205 07:47:28.439223  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.439232  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:28.439238  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:28.439323  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:28.463516  299667 cri.go:89] found id: ""
	I1205 07:47:28.463587  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.463610  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:28.463628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:28.463709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:28.489425  299667 cri.go:89] found id: ""
	I1205 07:47:28.489450  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.489459  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:28.489467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:28.489560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:28.516772  299667 cri.go:89] found id: ""
	I1205 07:47:28.516797  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.516806  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:28.516812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:28.516872  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:28.543466  299667 cri.go:89] found id: ""
	I1205 07:47:28.543490  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.543498  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:28.543507  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:28.543564  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:28.568431  299667 cri.go:89] found id: ""
	I1205 07:47:28.568455  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.568463  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:28.568469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:28.568528  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:28.593549  299667 cri.go:89] found id: ""
	I1205 07:47:28.593573  299667 logs.go:282] 0 containers: []
	W1205 07:47:28.593581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:28.593590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:28.593601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:28.652330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:28.652364  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:28.665857  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:28.665882  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:28.733864  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:28.725925    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.726464    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728268    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.728698    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:28.730143    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:28.733886  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:28.733898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:28.758935  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:28.758971  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
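Editor's note: the cycle above is minikube's log-gathering loop: for each control-plane component it asks the CRI runtime for matching containers and, finding none, records `No container was found matching`. The component list below is taken verbatim from the log; the loop itself is a hedged reconstruction for running the same queries manually inside the node, not minikube's own code:

    # sketch: repeat the per-component CRI queries the log performs
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output corresponds to the log's found id: ""
    done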
	W1205 07:47:30.102577  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:32.602389  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:34.602704  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:31.286625  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:31.297007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:31.297075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:31.324486  299667 cri.go:89] found id: ""
	I1205 07:47:31.324508  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.324517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:31.324523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:31.324585  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:31.367211  299667 cri.go:89] found id: ""
	I1205 07:47:31.367234  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.367242  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:31.367249  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:31.367336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:31.398063  299667 cri.go:89] found id: ""
	I1205 07:47:31.398124  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.398148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:31.398166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:31.398239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:31.430255  299667 cri.go:89] found id: ""
	I1205 07:47:31.430280  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.430288  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:31.430303  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:31.430362  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:31.455188  299667 cri.go:89] found id: ""
	I1205 07:47:31.455213  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.455222  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:31.455228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:31.455304  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:31.483709  299667 cri.go:89] found id: ""
	I1205 07:47:31.483734  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.483743  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:31.483754  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:31.483841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:31.511054  299667 cri.go:89] found id: ""
	I1205 07:47:31.511081  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.511090  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:31.511096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:31.511154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:31.536168  299667 cri.go:89] found id: ""
	I1205 07:47:31.536193  299667 logs.go:282] 0 containers: []
	W1205 07:47:31.536202  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:31.536211  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:31.536222  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:31.592031  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:31.592066  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:31.606480  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:31.606506  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:31.673271  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:31.665811    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.666387    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668073    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.668549    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:31.670076    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:31.673294  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:31.673309  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:31.699030  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:31.699063  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:34.230473  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:34.241086  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:34.241182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:34.266354  299667 cri.go:89] found id: ""
	I1205 07:47:34.266377  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.266386  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:34.266393  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:34.266455  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:34.295281  299667 cri.go:89] found id: ""
	I1205 07:47:34.295304  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.295313  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:34.295322  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:34.295381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:34.320096  299667 cri.go:89] found id: ""
	I1205 07:47:34.320119  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.320127  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:34.320134  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:34.320193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:34.351699  299667 cri.go:89] found id: ""
	I1205 07:47:34.351769  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.351778  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:34.351785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:34.351890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:34.384621  299667 cri.go:89] found id: ""
	I1205 07:47:34.384643  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.384651  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:34.384658  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:34.384716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:34.416183  299667 cri.go:89] found id: ""
	I1205 07:47:34.416209  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.416217  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:34.416225  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:34.416303  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:34.442818  299667 cri.go:89] found id: ""
	I1205 07:47:34.442843  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.442852  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:34.442859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:34.442926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:34.467574  299667 cri.go:89] found id: ""
	I1205 07:47:34.467600  299667 logs.go:282] 0 containers: []
	W1205 07:47:34.467608  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:34.467618  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:34.467630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:34.525566  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:34.525599  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:34.538971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:34.539003  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:34.603104  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:34.595744    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.596337    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.597846    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.598357    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:34.599865    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:34.603123  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:34.603135  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:34.627990  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:34.628024  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
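Editor's note: with no containers to inspect, each cycle falls back to host-level logs. The commands below appear verbatim in the log; they are collected here as a runnable sketch for pulling the same evidence from the node in one pass:

    # sketch: the host-side log sources each gathering cycle reads
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a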
	W1205 07:47:37.102277  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:39.102399  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:37.156741  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:37.168917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:37.168986  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:37.194896  299667 cri.go:89] found id: ""
	I1205 07:47:37.194920  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.194929  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:37.194935  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:37.194996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:37.220279  299667 cri.go:89] found id: ""
	I1205 07:47:37.220316  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.220324  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:37.220331  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:37.220402  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:37.244728  299667 cri.go:89] found id: ""
	I1205 07:47:37.244759  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.244768  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:37.244774  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:37.244838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:37.269770  299667 cri.go:89] found id: ""
	I1205 07:47:37.269794  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.269802  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:37.269809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:37.269865  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:37.296343  299667 cri.go:89] found id: ""
	I1205 07:47:37.296367  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.296376  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:37.296382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:37.296444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:37.321553  299667 cri.go:89] found id: ""
	I1205 07:47:37.321576  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.321585  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:37.321592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:37.321651  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:37.356802  299667 cri.go:89] found id: ""
	I1205 07:47:37.356824  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.356834  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:37.356841  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:37.356901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:37.384475  299667 cri.go:89] found id: ""
	I1205 07:47:37.384497  299667 logs.go:282] 0 containers: []
	W1205 07:47:37.384505  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:37.384513  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:37.384524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:37.451184  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:37.451220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:37.465508  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:37.465535  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:37.531461  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:37.523607    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.524430    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.525986    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.526272    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:37.527788    4327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:37.531483  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:37.531495  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:37.556492  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:37.556531  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.084953  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 07:47:41.103193  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:43.602434  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:47:40.099166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:40.099240  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:40.129037  299667 cri.go:89] found id: ""
	I1205 07:47:40.129058  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.129066  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:40.129074  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:40.129147  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:40.166711  299667 cri.go:89] found id: ""
	I1205 07:47:40.166735  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.166743  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:40.166752  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:40.166813  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:40.192959  299667 cri.go:89] found id: ""
	I1205 07:47:40.192982  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.192991  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:40.192998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:40.193056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:40.218168  299667 cri.go:89] found id: ""
	I1205 07:47:40.218193  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.218202  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:40.218208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:40.218292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:40.243397  299667 cri.go:89] found id: ""
	I1205 07:47:40.243420  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.243428  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:40.243435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:40.243510  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:40.268685  299667 cri.go:89] found id: ""
	I1205 07:47:40.268710  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.268718  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:40.268725  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:40.268802  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:40.294417  299667 cri.go:89] found id: ""
	I1205 07:47:40.294443  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.294452  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:40.294480  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:40.294561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:40.321495  299667 cri.go:89] found id: ""
	I1205 07:47:40.321556  299667 logs.go:282] 0 containers: []
	W1205 07:47:40.321570  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:40.321580  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:40.321592  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:40.360106  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:40.360133  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:40.420594  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:40.420627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:40.437302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:40.437332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:40.503821  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:40.496104    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.496818    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498314    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.498984    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:40.500594    4454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:40.503843  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:40.503855  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.028974  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:43.039847  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:43.039922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:43.066179  299667 cri.go:89] found id: ""
	I1205 07:47:43.066202  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.066210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:43.066216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:43.066274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:43.092504  299667 cri.go:89] found id: ""
	I1205 07:47:43.092528  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.092536  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:43.092543  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:43.092610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:43.124060  299667 cri.go:89] found id: ""
	I1205 07:47:43.124086  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.124095  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:43.124102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:43.124166  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:43.154063  299667 cri.go:89] found id: ""
	I1205 07:47:43.154089  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.154098  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:43.154104  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:43.154174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:43.185231  299667 cri.go:89] found id: ""
	I1205 07:47:43.185255  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.185264  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:43.185271  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:43.185334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:43.214039  299667 cri.go:89] found id: ""
	I1205 07:47:43.214113  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.214135  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:43.214153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:43.214239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:43.239645  299667 cri.go:89] found id: ""
	I1205 07:47:43.239709  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.239730  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:43.239747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:43.239836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:43.264373  299667 cri.go:89] found id: ""
	I1205 07:47:43.264437  299667 logs.go:282] 0 containers: []
	W1205 07:47:43.264458  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:43.264478  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:43.264514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:43.320427  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:43.320464  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:43.334556  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:43.334586  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:43.419578  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:43.411501    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.412048    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414019    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.414563    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:43.416202    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:43.419600  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:43.419613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:43.444937  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:43.444974  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:47:45.602606  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:48.102422  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
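Editor's note: the interleaved W-level lines from process 297527 are the parallel no-preload test polling the node's Ready condition against 192.168.76.2:8443 roughly every 2 to 2.5 seconds and hitting the same connection refused. A hedged equivalent of that probe with plain kubectl, using the server address and node name from the log (authenticating against the test's kubeconfig is assumed; while the apiserver is down this fails the same way the test does):

    # sketch: manually probe the Ready condition the test is waiting on
    kubectl --server=https://192.168.76.2:8443 get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'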
	I1205 07:47:45.973125  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:45.983741  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:45.983836  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:46.021150  299667 cri.go:89] found id: ""
	I1205 07:47:46.021200  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.021208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:46.021215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:46.021296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:46.046658  299667 cri.go:89] found id: ""
	I1205 07:47:46.046688  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.046725  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:46.046732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:46.046806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:46.072039  299667 cri.go:89] found id: ""
	I1205 07:47:46.072113  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.072136  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:46.072153  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:46.072239  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:46.117323  299667 cri.go:89] found id: ""
	I1205 07:47:46.117399  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.117423  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:46.117448  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:46.117538  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:46.154886  299667 cri.go:89] found id: ""
	I1205 07:47:46.154912  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.154921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:46.154928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:46.155012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:46.181153  299667 cri.go:89] found id: ""
	I1205 07:47:46.181199  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.181208  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:46.181215  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:46.181302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:46.211244  299667 cri.go:89] found id: ""
	I1205 07:47:46.211270  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.211279  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:46.211285  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:46.211346  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:46.235089  299667 cri.go:89] found id: ""
	I1205 07:47:46.235164  299667 logs.go:282] 0 containers: []
	W1205 07:47:46.235180  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:46.235191  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:46.235203  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:46.305530  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:46.297872    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.298427    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.299571    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.300102    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:46.301756    4666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:46.305551  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:46.305563  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:46.330757  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:46.330792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:46.376750  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:46.376781  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:46.439507  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:46.439542  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:48.953904  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:48.964561  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:48.964628  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:48.987874  299667 cri.go:89] found id: ""
	I1205 07:47:48.987900  299667 logs.go:282] 0 containers: []
	W1205 07:47:48.987909  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:48.987916  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:48.987974  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:49.014890  299667 cri.go:89] found id: ""
	I1205 07:47:49.014966  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.014980  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:49.014988  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:49.015065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:49.040290  299667 cri.go:89] found id: ""
	I1205 07:47:49.040313  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.040321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:49.040328  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:49.040385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:49.065216  299667 cri.go:89] found id: ""
	I1205 07:47:49.065278  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.065287  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:49.065293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:49.065350  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:49.091916  299667 cri.go:89] found id: ""
	I1205 07:47:49.091941  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.091950  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:49.091956  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:49.092015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:49.122078  299667 cri.go:89] found id: ""
	I1205 07:47:49.122101  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.122110  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:49.122117  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:49.122174  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:49.148378  299667 cri.go:89] found id: ""
	I1205 07:47:49.148400  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.148409  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:49.148415  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:49.148474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:49.181597  299667 cri.go:89] found id: ""
	I1205 07:47:49.181623  299667 logs.go:282] 0 containers: []
	W1205 07:47:49.181639  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
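
The scan above is minikube asking the CRI, by container name, for each control-plane component it expects; every query returns an empty ID list, so no control-plane container exists in any state. A minimal way to reproduce the same sweep by hand (a sketch, assuming crictl is on the node's PATH, as the fallback in the log's container-status command suggests):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers; --quiet prints only container IDs
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done
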
	I1205 07:47:49.181649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:49.181660  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:49.237429  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:49.237462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:49.252514  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:49.252540  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:49.317886  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:49.309655    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.310742    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.312479    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.313078    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:49.314578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
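
The describe-nodes step fails because kubectl, driven by /var/lib/minikube/kubeconfig, targets https://localhost:8443, and nothing is listening there: "connection refused" is a TCP-level rejection, consistent with the empty kube-apiserver scan above. Two quick probes that would confirm this from inside the node (a sketch, not taken from this log):

    # No process bound to the apiserver port means connect() is refused outright.
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
    # Probe the apiserver health endpoint; -k skips certificate verification.
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
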
	I1205 07:47:49.317908  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:49.317922  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:49.343471  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:49.343503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
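
That completes one full diagnostic pass: scan for control-plane containers, then gather kubelet, dmesg, describe-nodes, containerd, and container-status output. The same pass repeats below every few seconds while minikube waits for the apiserver. The host-side equivalent of the journal pulls would be something like the following (a sketch; <profile> is a placeholder for the cluster under test, which this excerpt does not name):

    # Run the same journal queries from the host via minikube's ssh wrapper.
    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
    minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400
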
	W1205 07:47:50.103132  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:52.602329  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
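
Interleaved with the cycle above, a second test process (pid 297527) is polling the "no-preload-241270" node's Ready condition against https://192.168.76.2:8443 and hitting the same TCP refusal. Its check amounts to the following kubectl query (a sketch; while the apiserver is down this too returns connection refused):

    # Read just the Ready condition's status from the node object.
    kubectl --server=https://192.168.76.2:8443 --insecure-skip-tls-verify \
      get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
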
	I1205 07:47:51.885282  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:51.895713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:51.895806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:51.923558  299667 cri.go:89] found id: ""
	I1205 07:47:51.923582  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.923592  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:51.923599  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:51.923702  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:51.952466  299667 cri.go:89] found id: ""
	I1205 07:47:51.952490  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.952499  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:51.952506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:51.952594  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:51.977008  299667 cri.go:89] found id: ""
	I1205 07:47:51.977032  299667 logs.go:282] 0 containers: []
	W1205 07:47:51.977041  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:51.977048  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:51.977130  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:52.001855  299667 cri.go:89] found id: ""
	I1205 07:47:52.001880  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.001890  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:52.001918  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:52.002010  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:52.041299  299667 cri.go:89] found id: ""
	I1205 07:47:52.041367  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.041391  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:52.041410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:52.041490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:52.066425  299667 cri.go:89] found id: ""
	I1205 07:47:52.066448  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.066457  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:52.066484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:52.066567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:52.093389  299667 cri.go:89] found id: ""
	I1205 07:47:52.093415  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.093425  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:52.093431  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:52.093490  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:52.131379  299667 cri.go:89] found id: ""
	I1205 07:47:52.131404  299667 logs.go:282] 0 containers: []
	W1205 07:47:52.131412  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:52.131421  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:52.131432  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:52.172215  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:52.172246  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:52.232285  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:52.232317  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:52.246383  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:52.246461  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:52.312938  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:52.304672    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.305506    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307336    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.307918    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:52.309694    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:52.312999  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:52.313037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:54.839218  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:54.849526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:54.849596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:54.878984  299667 cri.go:89] found id: ""
	I1205 07:47:54.879018  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.879028  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:54.879034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:54.879115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:54.903570  299667 cri.go:89] found id: ""
	I1205 07:47:54.903593  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.903603  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:54.903609  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:54.903668  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:54.928679  299667 cri.go:89] found id: ""
	I1205 07:47:54.928701  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.928710  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:54.928716  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:54.928772  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:54.957443  299667 cri.go:89] found id: ""
	I1205 07:47:54.957465  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.957474  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:54.957481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:54.957539  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:54.981997  299667 cri.go:89] found id: ""
	I1205 07:47:54.982022  299667 logs.go:282] 0 containers: []
	W1205 07:47:54.982031  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:54.982037  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:54.982097  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:55.019658  299667 cri.go:89] found id: ""
	I1205 07:47:55.019684  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.019694  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:55.019702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:55.019774  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:55.045945  299667 cri.go:89] found id: ""
	I1205 07:47:55.045968  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.045977  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:55.045982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:55.046047  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:55.070660  299667 cri.go:89] found id: ""
	I1205 07:47:55.070682  299667 logs.go:282] 0 containers: []
	W1205 07:47:55.070691  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:55.070753  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:55.070772  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:55.103139  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:57.602889  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:47:55.155877  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:55.141096    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.141661    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.150363    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.151128    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:55.152706    4994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:55.155904  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:55.155918  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:55.182506  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:55.182538  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:47:55.209519  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:55.209545  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:55.268283  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:55.268315  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
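
The dmesg filter repeated throughout these passes keeps only warning-or-worse kernel records; spelled out with long options it reads as follows (a sketch, same behavior as the short flags in the log):

    # -P/--nopager, -H/--human, -L=never/--color=never, plus a severity filter.
    sudo dmesg --nopager --human --color=never \
      --level=warn,err,crit,alert,emerg | tail -n 400
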
	I1205 07:47:57.781956  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:47:57.792419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:47:57.792511  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:47:57.816805  299667 cri.go:89] found id: ""
	I1205 07:47:57.816830  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.816839  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:47:57.816845  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:47:57.816907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:47:57.844943  299667 cri.go:89] found id: ""
	I1205 07:47:57.844967  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.844975  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:47:57.844982  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:47:57.845041  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:47:57.869698  299667 cri.go:89] found id: ""
	I1205 07:47:57.869720  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.869728  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:47:57.869735  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:47:57.869792  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:47:57.894855  299667 cri.go:89] found id: ""
	I1205 07:47:57.894881  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.894889  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:47:57.894896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:47:57.895015  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:47:57.919181  299667 cri.go:89] found id: ""
	I1205 07:47:57.919207  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.919217  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:47:57.919223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:47:57.919284  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:47:57.947523  299667 cri.go:89] found id: ""
	I1205 07:47:57.947545  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.947553  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:47:57.947559  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:47:57.947617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:47:57.972190  299667 cri.go:89] found id: ""
	I1205 07:47:57.972212  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.972221  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:47:57.972227  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:47:57.972337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:47:57.995598  299667 cri.go:89] found id: ""
	I1205 07:47:57.995620  299667 logs.go:282] 0 containers: []
	W1205 07:47:57.995628  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:47:57.995637  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:47:57.995648  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:47:58.053180  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:47:58.053214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:47:58.066958  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:47:58.067035  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:47:58.148853  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:47:58.141452    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.142226    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.143951    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.144255    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:47:58.145689    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:47:58.148871  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:47:58.148884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:47:58.177078  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:47:58.177111  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:00.102486  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:02.602313  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:04.602418  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:00.709764  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:00.720636  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:00.720709  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:00.745332  299667 cri.go:89] found id: ""
	I1205 07:48:00.745357  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.745367  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:00.745377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:00.745446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:00.769743  299667 cri.go:89] found id: ""
	I1205 07:48:00.769766  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.769774  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:00.769780  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:00.769838  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:00.793723  299667 cri.go:89] found id: ""
	I1205 07:48:00.793747  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.793755  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:00.793761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:00.793849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:00.822270  299667 cri.go:89] found id: ""
	I1205 07:48:00.822295  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.822304  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:00.822311  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:00.822372  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:00.846055  299667 cri.go:89] found id: ""
	I1205 07:48:00.846079  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.846088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:00.846094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:00.846154  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:00.875896  299667 cri.go:89] found id: ""
	I1205 07:48:00.875927  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.875938  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:00.875945  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:00.876005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:00.901376  299667 cri.go:89] found id: ""
	I1205 07:48:00.901401  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.901410  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:00.901417  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:00.901478  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:00.931038  299667 cri.go:89] found id: ""
	I1205 07:48:00.931062  299667 logs.go:282] 0 containers: []
	W1205 07:48:00.931070  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:00.931080  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:00.931121  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:00.997183  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:00.989740    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.990455    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.991954    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.992421    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:00.994082    5219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:00.997205  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:00.997217  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:01.023514  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:01.023552  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:01.051665  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:01.051694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:01.112451  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:01.112528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:03.628641  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:03.640043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:03.640115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:03.668895  299667 cri.go:89] found id: ""
	I1205 07:48:03.668923  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.668932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:03.668939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:03.669005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:03.698851  299667 cri.go:89] found id: ""
	I1205 07:48:03.698873  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.698882  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:03.698888  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:03.698946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:03.724736  299667 cri.go:89] found id: ""
	I1205 07:48:03.724758  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.724767  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:03.724773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:03.724831  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:03.751007  299667 cri.go:89] found id: ""
	I1205 07:48:03.751030  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.751038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:03.751072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:03.751143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:03.779130  299667 cri.go:89] found id: ""
	I1205 07:48:03.779153  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.779162  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:03.779168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:03.779226  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:03.808717  299667 cri.go:89] found id: ""
	I1205 07:48:03.808738  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.808798  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:03.808812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:03.808893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:03.834648  299667 cri.go:89] found id: ""
	I1205 07:48:03.834745  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.834769  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:03.834790  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:03.834894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:03.860266  299667 cri.go:89] found id: ""
	I1205 07:48:03.860290  299667 logs.go:282] 0 containers: []
	W1205 07:48:03.860298  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:03.860307  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:03.860326  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:03.925650  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:03.917386    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.918307    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920018    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.920458    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:03.922014    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:03.925672  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:03.925684  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:03.951836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:03.951866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:03.981147  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:03.981199  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:04.037271  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:04.037308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:48:07.102890  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:09.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:06.551820  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:06.562850  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:06.562922  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:06.588022  299667 cri.go:89] found id: ""
	I1205 07:48:06.588044  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.588052  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:06.588059  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:06.588121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:06.618654  299667 cri.go:89] found id: ""
	I1205 07:48:06.618677  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.618687  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:06.618693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:06.618760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:06.654167  299667 cri.go:89] found id: ""
	I1205 07:48:06.654188  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.654197  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:06.654203  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:06.654261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:06.681234  299667 cri.go:89] found id: ""
	I1205 07:48:06.681306  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.681327  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:06.681345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:06.681437  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:06.705922  299667 cri.go:89] found id: ""
	I1205 07:48:06.705946  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.705955  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:06.705962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:06.706044  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:06.730881  299667 cri.go:89] found id: ""
	I1205 07:48:06.730913  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.730924  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:06.730930  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:06.730987  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:06.755636  299667 cri.go:89] found id: ""
	I1205 07:48:06.755661  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.755670  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:06.755676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:06.755743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:06.780702  299667 cri.go:89] found id: ""
	I1205 07:48:06.780735  299667 logs.go:282] 0 containers: []
	W1205 07:48:06.780743  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:06.780753  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:06.780764  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:06.841265  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:06.841303  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:06.854661  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:06.854686  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:06.918298  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:06.910651    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.911119    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912277    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.912727    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:06.914309    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:06.918316  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:06.918328  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:06.943239  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:06.943274  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.471658  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:09.482526  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:09.482598  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:09.507658  299667 cri.go:89] found id: ""
	I1205 07:48:09.507683  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.507692  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:09.507699  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:09.507765  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:09.538688  299667 cri.go:89] found id: ""
	I1205 07:48:09.538744  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.538758  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:09.538765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:09.538835  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:09.564016  299667 cri.go:89] found id: ""
	I1205 07:48:09.564041  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.564050  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:09.564056  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:09.564118  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:09.595020  299667 cri.go:89] found id: ""
	I1205 07:48:09.595047  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.595056  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:09.595062  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:09.595170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:09.627725  299667 cri.go:89] found id: ""
	I1205 07:48:09.627747  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.627756  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:09.627763  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:09.627821  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:09.661208  299667 cri.go:89] found id: ""
	I1205 07:48:09.661273  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.661290  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:09.661297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:09.661371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:09.686173  299667 cri.go:89] found id: ""
	I1205 07:48:09.686207  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.686216  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:09.686223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:09.686291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:09.710385  299667 cri.go:89] found id: ""
	I1205 07:48:09.710417  299667 logs.go:282] 0 containers: []
	W1205 07:48:09.710426  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:09.710435  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:09.710447  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:09.724065  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:09.724089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:09.786352  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:09.779403    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.780102    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781556    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.781957    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:09.783406    5559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:09.786371  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:09.786383  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:09.814782  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:09.814823  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:09.845678  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:09.845705  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
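The block above is one full pass of minikube's log-gathering loop: process 299667 waits for an apiserver process (`pgrep -xnf kube-apiserver.*minikube.*`), asks crictl for each expected control-plane container, finds none, and falls back to dmesg, `kubectl describe nodes`, containerd, container status, and kubelet logs. A minimal bash sketch of the same probe sequence, assembled only from the commands visible in the log (run on the node, e.g. via `minikube ssh`):

    #!/usr/bin/env bash
    # Probe each expected control-plane container the way the log does.
    set -u
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet kubernetes-dashboard)
    for name in "${components[@]}"; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # Fallback log sources gathered when no containers are found:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a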
	W1205 07:48:11.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:14.102692  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:12.403586  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:12.414137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:12.414208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:12.443644  299667 cri.go:89] found id: ""
	I1205 07:48:12.443666  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.443677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:12.443683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:12.443743  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:12.468970  299667 cri.go:89] found id: ""
	I1205 07:48:12.468992  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.469001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:12.469007  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:12.469073  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:12.495420  299667 cri.go:89] found id: ""
	I1205 07:48:12.495441  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.495449  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:12.495455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:12.495513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:12.520821  299667 cri.go:89] found id: ""
	I1205 07:48:12.520848  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.520857  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:12.520862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:12.520920  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:12.546738  299667 cri.go:89] found id: ""
	I1205 07:48:12.546767  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.546776  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:12.546782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:12.546845  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:12.571663  299667 cri.go:89] found id: ""
	I1205 07:48:12.571687  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.571696  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:12.571702  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:12.571759  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:12.600237  299667 cri.go:89] found id: ""
	I1205 07:48:12.600263  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.600272  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:12.600279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:12.600336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:12.645073  299667 cri.go:89] found id: ""
	I1205 07:48:12.645108  299667 logs.go:282] 0 containers: []
	W1205 07:48:12.645116  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:12.645126  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:12.645137  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:12.661987  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:12.662020  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:12.726418  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:12.719047    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.719450    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.720924    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.721357    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:12.723128    5668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:12.726442  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:12.726455  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:12.751208  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:12.751243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:12.780690  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:12.780718  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:16.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:18.602693  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:15.336959  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:15.349150  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:15.349233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:15.379055  299667 cri.go:89] found id: ""
	I1205 07:48:15.379075  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.379084  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:15.379090  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:15.379148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:15.411812  299667 cri.go:89] found id: ""
	I1205 07:48:15.411832  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.411841  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:15.411849  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:15.411907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:15.436056  299667 cri.go:89] found id: ""
	I1205 07:48:15.436077  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.436085  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:15.436091  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:15.436152  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:15.461323  299667 cri.go:89] found id: ""
	I1205 07:48:15.461345  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.461354  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:15.461360  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:15.461416  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:15.490552  299667 cri.go:89] found id: ""
	I1205 07:48:15.490577  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.490586  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:15.490593  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:15.490682  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:15.519448  299667 cri.go:89] found id: ""
	I1205 07:48:15.519471  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.519480  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:15.519487  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:15.519544  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:15.548923  299667 cri.go:89] found id: ""
	I1205 07:48:15.548947  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.548956  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:15.548962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:15.549024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:15.574804  299667 cri.go:89] found id: ""
	I1205 07:48:15.574828  299667 logs.go:282] 0 containers: []
	W1205 07:48:15.574839  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:15.574847  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:15.574878  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:15.634392  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:15.634428  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:15.651971  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:15.651998  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:15.719384  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:15.712311    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.712792    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714194    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.714692    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:15.716166    5785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:15.719407  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:15.719418  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:15.743909  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:15.743941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.273819  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:18.284902  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:18.284975  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:18.310770  299667 cri.go:89] found id: ""
	I1205 07:48:18.310793  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.310802  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:18.310809  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:18.310868  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:18.335509  299667 cri.go:89] found id: ""
	I1205 07:48:18.335530  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.335538  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:18.335544  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:18.335602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:18.367849  299667 cri.go:89] found id: ""
	I1205 07:48:18.367875  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.367884  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:18.367890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:18.367947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:18.397008  299667 cri.go:89] found id: ""
	I1205 07:48:18.397037  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.397046  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:18.397053  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:18.397115  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:18.422994  299667 cri.go:89] found id: ""
	I1205 07:48:18.423017  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.423035  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:18.423043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:18.423109  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:18.447590  299667 cri.go:89] found id: ""
	I1205 07:48:18.447666  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.447689  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:18.447713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:18.447801  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:18.472279  299667 cri.go:89] found id: ""
	I1205 07:48:18.472353  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.472375  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:18.472392  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:18.472477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:18.497432  299667 cri.go:89] found id: ""
	I1205 07:48:18.497454  299667 logs.go:282] 0 containers: []
	W1205 07:48:18.497463  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:18.497471  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:18.497484  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:18.522163  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:18.522196  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:18.550354  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:18.550378  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:18.605871  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:18.605944  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:18.623406  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:18.623435  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:18.692830  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:18.684718    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.685292    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.686860    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.687404    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:18.689043    5913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:48:20.603254  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:23.103214  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:21.193117  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:21.203367  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:21.203430  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:21.228233  299667 cri.go:89] found id: ""
	I1205 07:48:21.228257  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.228265  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:21.228272  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:21.228331  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:21.256427  299667 cri.go:89] found id: ""
	I1205 07:48:21.256448  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.256456  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:21.256462  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:21.256523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:21.281113  299667 cri.go:89] found id: ""
	I1205 07:48:21.281136  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.281145  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:21.281151  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:21.281238  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:21.305777  299667 cri.go:89] found id: ""
	I1205 07:48:21.305798  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.305806  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:21.305812  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:21.305869  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:21.335558  299667 cri.go:89] found id: ""
	I1205 07:48:21.335622  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.335645  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:21.335662  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:21.335745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:21.374161  299667 cri.go:89] found id: ""
	I1205 07:48:21.374230  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.374257  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:21.374275  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:21.374358  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:21.403378  299667 cri.go:89] found id: ""
	I1205 07:48:21.403442  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.403464  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:21.403481  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:21.403561  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:21.428681  299667 cri.go:89] found id: ""
	I1205 07:48:21.428707  299667 logs.go:282] 0 containers: []
	W1205 07:48:21.428717  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:21.428725  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:21.428736  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:21.485472  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:21.485503  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:21.499440  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:21.499521  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:21.564057  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:21.556176    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.556823    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558351    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.558883    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:21.560592    6014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:21.564088  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:21.564102  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:21.588591  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:21.588627  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.133263  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:24.145210  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:24.145292  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:24.172487  299667 cri.go:89] found id: ""
	I1205 07:48:24.172509  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.172517  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:24.172523  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:24.172582  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:24.197589  299667 cri.go:89] found id: ""
	I1205 07:48:24.197612  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.197634  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:24.197641  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:24.197727  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:24.232698  299667 cri.go:89] found id: ""
	I1205 07:48:24.232773  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.232803  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:24.232821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:24.232927  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:24.261831  299667 cri.go:89] found id: ""
	I1205 07:48:24.261854  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.261863  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:24.261870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:24.261932  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:24.290390  299667 cri.go:89] found id: ""
	I1205 07:48:24.290412  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.290420  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:24.290426  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:24.290486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:24.314257  299667 cri.go:89] found id: ""
	I1205 07:48:24.314327  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.314360  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:24.314383  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:24.314475  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:24.338446  299667 cri.go:89] found id: ""
	I1205 07:48:24.338469  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.338477  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:24.338484  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:24.338542  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:24.366265  299667 cri.go:89] found id: ""
	I1205 07:48:24.366302  299667 logs.go:282] 0 containers: []
	W1205 07:48:24.366314  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:24.366323  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:24.366335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:24.398722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:24.398759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:24.430842  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:24.430872  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:24.486913  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:24.486947  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:24.500309  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:24.500333  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:24.571107  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:24.563214    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.564079    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.565734    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.566239    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:24.567834    6137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1205 07:48:25.602309  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:28.102336  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:27.072799  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:27.082983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:27.083049  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:27.106973  299667 cri.go:89] found id: ""
	I1205 07:48:27.106997  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.107005  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:27.107012  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:27.107072  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:27.131580  299667 cri.go:89] found id: ""
	I1205 07:48:27.131604  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.131613  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:27.131619  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:27.131679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:27.156330  299667 cri.go:89] found id: ""
	I1205 07:48:27.156356  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.156364  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:27.156371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:27.156434  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:27.180350  299667 cri.go:89] found id: ""
	I1205 07:48:27.180375  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.180384  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:27.180391  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:27.180449  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:27.204756  299667 cri.go:89] found id: ""
	I1205 07:48:27.204779  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.204787  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:27.204800  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:27.204858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:27.232181  299667 cri.go:89] found id: ""
	I1205 07:48:27.232207  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.232216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:27.232223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:27.232299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:27.258059  299667 cri.go:89] found id: ""
	I1205 07:48:27.258086  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.258095  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:27.258102  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:27.258165  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:27.281695  299667 cri.go:89] found id: ""
	I1205 07:48:27.281717  299667 logs.go:282] 0 containers: []
	W1205 07:48:27.281725  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:27.281734  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:27.281746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:27.294855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:27.294880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:27.362846  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:27.354959    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.355648    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.357415    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.358049    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:27.359574    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:27.362868  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:27.362880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:27.389761  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:27.389791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:27.422138  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:27.422165  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
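Every `describe nodes` attempt fails the same way because kubectl dials localhost:8443 and nothing is listening there; the refused connection, not kubectl, is the signal. A quick check on the node confirms this directly (assuming `ss` from iproute2 is available in the node image):

    # Report whether any process is listening on the apiserver port.
    sudo ss -ltnp 'sport = :8443' | grep -q LISTEN \
      && echo "something is listening on 8443" \
      || echo "nothing on 8443: connection refused is expected"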
	I1205 07:48:29.980506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:29.990724  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:29.990791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:30.035211  299667 cri.go:89] found id: ""
	I1205 07:48:30.035238  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.035248  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:30.035256  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:30.035326  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:30.063908  299667 cri.go:89] found id: ""
	I1205 07:48:30.063944  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.063953  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:30.063960  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:30.064034  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1205 07:48:30.103232  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:32.602298  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:34.602332  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:30.095785  299667 cri.go:89] found id: ""
	I1205 07:48:30.095860  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.095883  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:30.095908  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:30.096002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:30.123133  299667 cri.go:89] found id: ""
	I1205 07:48:30.123156  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.123166  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:30.123172  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:30.123235  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:30.149862  299667 cri.go:89] found id: ""
	I1205 07:48:30.149885  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.149894  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:30.149901  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:30.150013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:30.175817  299667 cri.go:89] found id: ""
	I1205 07:48:30.175883  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.175903  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:30.175920  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:30.176005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:30.201607  299667 cri.go:89] found id: ""
	I1205 07:48:30.201631  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.201640  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:30.201646  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:30.201711  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:30.227899  299667 cri.go:89] found id: ""
	I1205 07:48:30.227922  299667 logs.go:282] 0 containers: []
	W1205 07:48:30.227931  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:30.227940  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:30.227952  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:30.241708  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:30.241742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:30.309566  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:30.302822    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.303506    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.304959    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.305407    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:30.306598    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:30.309584  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:30.309597  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:30.334740  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:30.334771  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:30.378494  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:30.378524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
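The block above is one complete pass of minikube's log collector: it asks the containerd CRI for each control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to journald, dmesg, and kubectl describe nodes. A minimal sketch of the same probes, runnable by hand on the node (commands taken verbatim from the log lines above, assuming crictl talks to containerd as it does here):

    # Empty output here matches the found id: "" lines above:
    # no container, running or exited, exists for the component.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # The journald queries minikube runs for runtime and kubelet logs.
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400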
	I1205 07:48:32.939968  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:32.950759  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:32.950832  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:32.978406  299667 cri.go:89] found id: ""
	I1205 07:48:32.978430  299667 logs.go:282] 0 containers: []
	W1205 07:48:32.978438  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:32.978454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:32.978513  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:33.008532  299667 cri.go:89] found id: ""
	I1205 07:48:33.008559  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.008568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:33.008574  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:33.008650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:33.033972  299667 cri.go:89] found id: ""
	I1205 07:48:33.033997  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.034005  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:33.034013  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:33.034081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:33.059992  299667 cri.go:89] found id: ""
	I1205 07:48:33.060014  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.060023  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:33.060029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:33.060094  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:33.090354  299667 cri.go:89] found id: ""
	I1205 07:48:33.090379  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.090387  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:33.090395  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:33.090454  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:33.114706  299667 cri.go:89] found id: ""
	I1205 07:48:33.114735  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.114744  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:33.114751  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:33.114809  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:33.140456  299667 cri.go:89] found id: ""
	I1205 07:48:33.140481  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.140490  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:33.140496  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:33.140557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:33.169438  299667 cri.go:89] found id: ""
	I1205 07:48:33.169461  299667 logs.go:282] 0 containers: []
	W1205 07:48:33.169469  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:33.169478  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:33.169490  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:33.195155  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:33.195189  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:33.221590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:33.221617  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:33.277078  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:33.277110  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:33.290419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:33.290445  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:33.357621  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:33.348953    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.349496    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351268    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.351797    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:33.353684    6476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
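Every describe-nodes attempt in this window fails the same way: the node-local kubeconfig points kubectl at localhost:8443, nothing is listening there, so all five discovery requests get connection refused. A manual triage sequence consistent with the pgrep probes above (the individual commands are standard, but this exact sequence is illustrative, not something the harness runs):

    # Confirm no apiserver process and nothing bound to the apiserver port.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # Expect the same refusal kubectl reports above.
    curl -k https://localhost:8443/healthz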
	W1205 07:48:36.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:38.602933  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
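The two W-lines from PID 297527 belong to a different, concurrently running profile: the no-preload-241270 start path is polling that node's Ready condition against 192.168.76.2:8443 and hitting the same refused connection, so its output interleaves with the log collector running as PID 299667 (which is why the timestamps run slightly out of order across the two processes). An equivalent one-shot readiness check, sketched with an illustrative kubeconfig path:

    # Read just the Ready condition the retry loop is waiting on.
    kubectl --kubeconfig "$HOME/.kube/no-preload-241270" \
      get node no-preload-241270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'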
	I1205 07:48:35.857840  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:35.869455  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:35.869525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:35.904563  299667 cri.go:89] found id: ""
	I1205 07:48:35.904585  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.904594  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:35.904601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:35.904664  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:35.932592  299667 cri.go:89] found id: ""
	I1205 07:48:35.932613  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.932622  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:35.932628  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:35.932690  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:35.961011  299667 cri.go:89] found id: ""
	I1205 07:48:35.961033  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.961048  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:35.961055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:35.961121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:35.988109  299667 cri.go:89] found id: ""
	I1205 07:48:35.988131  299667 logs.go:282] 0 containers: []
	W1205 07:48:35.988139  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:35.988146  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:35.988212  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:36.021866  299667 cri.go:89] found id: ""
	I1205 07:48:36.021894  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.021903  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:36.021910  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:36.021980  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:36.053675  299667 cri.go:89] found id: ""
	I1205 07:48:36.053697  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.053706  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:36.053713  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:36.053773  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:36.088227  299667 cri.go:89] found id: ""
	I1205 07:48:36.088252  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.088261  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:36.088268  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:36.088330  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:36.114723  299667 cri.go:89] found id: ""
	I1205 07:48:36.114753  299667 logs.go:282] 0 containers: []
	W1205 07:48:36.114762  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:36.114772  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:36.114792  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:36.130077  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:36.130105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:36.199710  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:36.192608    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.193190    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.194885    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.195428    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:36.196487    6576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:36.199733  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:36.199746  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:36.224920  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:36.224953  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:36.260346  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:36.260373  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:38.818746  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:38.829029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:38.829103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:38.861723  299667 cri.go:89] found id: ""
	I1205 07:48:38.861746  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.861755  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:38.861761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:38.861827  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:38.889749  299667 cri.go:89] found id: ""
	I1205 07:48:38.889772  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.889781  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:38.889787  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:38.889849  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:38.925308  299667 cri.go:89] found id: ""
	I1205 07:48:38.925337  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.925346  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:38.925352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:38.925412  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:38.955710  299667 cri.go:89] found id: ""
	I1205 07:48:38.955732  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.955740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:38.955746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:38.955803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:38.980907  299667 cri.go:89] found id: ""
	I1205 07:48:38.980934  299667 logs.go:282] 0 containers: []
	W1205 07:48:38.980943  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:38.980951  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:38.981013  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:39.011368  299667 cri.go:89] found id: ""
	I1205 07:48:39.011398  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.011409  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:39.011416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:39.011489  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:39.037693  299667 cri.go:89] found id: ""
	I1205 07:48:39.037719  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.037727  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:39.037734  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:39.037806  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:39.063915  299667 cri.go:89] found id: ""
	I1205 07:48:39.063940  299667 logs.go:282] 0 containers: []
	W1205 07:48:39.063949  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:39.063957  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:39.063969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:39.120923  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:39.120960  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:39.134276  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:39.134302  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:39.194044  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:39.186820    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.187356    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189023    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.189537    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:39.191075    6689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:39.194064  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:39.194076  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:39.218536  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:39.218569  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
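The container-status step uses a shell fallback: command substitution picks crictl when it is installed (which crictl), and if that listing fails entirely the pipeline falls back to docker. Extracted from the log line above, runnable as-is:

    # Prefer crictl; if the crictl listing fails, try docker instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a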
	W1205 07:48:41.102495  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:43.102732  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:41.747231  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:41.758180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:41.758258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:41.785400  299667 cri.go:89] found id: ""
	I1205 07:48:41.785426  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.785435  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:41.785442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:41.785509  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:41.817641  299667 cri.go:89] found id: ""
	I1205 07:48:41.817667  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.817676  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:41.817683  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:41.817747  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:41.842820  299667 cri.go:89] found id: ""
	I1205 07:48:41.842846  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.842855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:41.842869  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:41.842933  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:41.880166  299667 cri.go:89] found id: ""
	I1205 07:48:41.880194  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.880208  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:41.880214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:41.880291  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:41.911193  299667 cri.go:89] found id: ""
	I1205 07:48:41.911258  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.911273  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:41.911281  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:41.911337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:41.935720  299667 cri.go:89] found id: ""
	I1205 07:48:41.935745  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.935754  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:41.935761  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:41.935823  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:41.962907  299667 cri.go:89] found id: ""
	I1205 07:48:41.962976  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.962992  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:41.962998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:41.963065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:41.991087  299667 cri.go:89] found id: ""
	I1205 07:48:41.991113  299667 logs.go:282] 0 containers: []
	W1205 07:48:41.991121  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:41.991130  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:41.991140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:42.070025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:42.070073  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:42.086499  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:42.086528  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:42.164053  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:42.154053    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.155038    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.157154    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.158061    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:42.159164    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:42.164130  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:42.164162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:42.192298  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:42.192342  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:44.734604  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:44.745356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:44.745423  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:44.770206  299667 cri.go:89] found id: ""
	I1205 07:48:44.770230  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.770239  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:44.770247  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:44.770305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:44.796086  299667 cri.go:89] found id: ""
	I1205 07:48:44.796109  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.796118  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:44.796124  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:44.796182  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:44.822053  299667 cri.go:89] found id: ""
	I1205 07:48:44.822125  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.822148  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:44.822167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:44.822258  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:44.855227  299667 cri.go:89] found id: ""
	I1205 07:48:44.855298  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.855320  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:44.855339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:44.855422  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:44.884787  299667 cri.go:89] found id: ""
	I1205 07:48:44.884859  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.885835  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:44.885875  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:44.885967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:44.922015  299667 cri.go:89] found id: ""
	I1205 07:48:44.922040  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.922048  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:44.922055  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:44.922120  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:44.946942  299667 cri.go:89] found id: ""
	I1205 07:48:44.946979  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.946988  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:44.946995  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:44.947056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:44.972229  299667 cri.go:89] found id: ""
	I1205 07:48:44.972253  299667 logs.go:282] 0 containers: []
	W1205 07:48:44.972262  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:44.972270  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:44.972280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:44.997401  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:44.997434  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:45.054576  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:45.054602  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:48:45.102947  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:47.602661  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:45.133742  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:45.133782  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:45.155399  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:45.155496  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:45.257582  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:45.247772    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.248903    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.250325    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.251981    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:45.252973    6924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
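Each describe-nodes pass shells out to the version-pinned kubectl binary that minikube stages on the node, against the node-local kubeconfig. Both paths are verbatim from the log, so the failing probe can be replayed directly:

    # Fails with connection refused while the apiserver is down,
    # and succeeds once it comes back.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig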
	I1205 07:48:47.759254  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:47.770034  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:47.770107  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:47.799850  299667 cri.go:89] found id: ""
	I1205 07:48:47.799873  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.799882  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:47.799889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:47.799947  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:47.824989  299667 cri.go:89] found id: ""
	I1205 07:48:47.825014  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.825022  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:47.825028  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:47.825089  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:47.857967  299667 cri.go:89] found id: ""
	I1205 07:48:47.857993  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.858002  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:47.858008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:47.858065  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:47.890800  299667 cri.go:89] found id: ""
	I1205 07:48:47.890833  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.890842  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:47.890851  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:47.890911  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:47.921850  299667 cri.go:89] found id: ""
	I1205 07:48:47.921874  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.921883  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:47.921890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:47.921950  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:47.946404  299667 cri.go:89] found id: ""
	I1205 07:48:47.946426  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.946435  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:47.946442  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:47.946501  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:47.972095  299667 cri.go:89] found id: ""
	I1205 07:48:47.972117  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.972125  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:47.972131  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:47.972189  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:47.996555  299667 cri.go:89] found id: ""
	I1205 07:48:47.996577  299667 logs.go:282] 0 containers: []
	W1205 07:48:47.996585  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:47.996594  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:47.996605  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:48.054087  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:48.054122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:48.069006  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:48.069038  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:48.132946  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:48.125080    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.125744    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127306    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.127870    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:48.129636    7024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:48.132968  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:48.132981  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:48.158949  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:48.158986  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:48:50.102346  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:52.103160  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:54.602949  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:48:50.687838  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:50.698642  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:50.698712  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:50.725092  299667 cri.go:89] found id: ""
	I1205 07:48:50.725113  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.725121  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:50.725128  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:50.725208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:50.750131  299667 cri.go:89] found id: ""
	I1205 07:48:50.750153  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.750161  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:50.750167  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:50.750233  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:50.774733  299667 cri.go:89] found id: ""
	I1205 07:48:50.774755  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.774765  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:50.774773  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:50.774858  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:50.803492  299667 cri.go:89] found id: ""
	I1205 07:48:50.803514  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.803524  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:50.803531  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:50.803596  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:50.828915  299667 cri.go:89] found id: ""
	I1205 07:48:50.828938  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.828947  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:50.828953  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:50.829022  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:50.862065  299667 cri.go:89] found id: ""
	I1205 07:48:50.862090  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.862098  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:50.862105  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:50.862168  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:50.888327  299667 cri.go:89] found id: ""
	I1205 07:48:50.888356  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.888365  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:50.888371  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:50.888432  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:50.917551  299667 cri.go:89] found id: ""
	I1205 07:48:50.917583  299667 logs.go:282] 0 containers: []
	W1205 07:48:50.917592  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:50.917601  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:50.917613  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:50.976691  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:50.976725  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:50.990259  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:50.990285  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:51.057592  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:48:51.049623    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.050384    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052057    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.052381    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:51.053915    7133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:48:51.057614  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:51.057628  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:51.088874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:51.088916  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.619589  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:53.630457  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:53.630521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:53.662396  299667 cri.go:89] found id: ""
	I1205 07:48:53.662420  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.662429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:53.662435  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:53.662493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:53.687365  299667 cri.go:89] found id: ""
	I1205 07:48:53.687393  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.687402  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:53.687408  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:53.687469  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:53.711757  299667 cri.go:89] found id: ""
	I1205 07:48:53.711782  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.711791  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:53.711798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:53.711893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:53.735695  299667 cri.go:89] found id: ""
	I1205 07:48:53.735721  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.735730  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:53.735736  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:53.735793  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:53.763008  299667 cri.go:89] found id: ""
	I1205 07:48:53.763032  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.763041  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:53.763047  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:53.763104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:53.791424  299667 cri.go:89] found id: ""
	I1205 07:48:53.791498  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.791520  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:53.791537  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:53.791617  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:53.815855  299667 cri.go:89] found id: ""
	I1205 07:48:53.815876  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.815884  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:53.815890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:53.815946  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:53.839524  299667 cri.go:89] found id: ""
	I1205 07:48:53.839548  299667 logs.go:282] 0 containers: []
	W1205 07:48:53.839557  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:53.839565  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:53.839577  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:53.884515  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:53.884591  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:53.947646  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:53.947682  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:53.961152  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:53.961211  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:54.031297  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:54.022908    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.023707    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.025654    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.026313    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:54.027946    7255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:54.031321  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:54.031335  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:48:57.102570  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:48:59.102902  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
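	The two W-level lines above come from a second test process (pid 297527) polling the Ready condition of node no-preload-241270. A hedged sketch of an equivalent manual poll, assuming a working kubeconfig for that profile:

	    # Retry until the node's Ready condition reports "True", mirroring the test's retry loop
	    until [ "$(kubectl get node no-preload-241270 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)" = "True" ]; do
	      echo "no-preload-241270 not Ready yet; retrying"
	      sleep 2
	    done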
	I1205 07:48:56.557021  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:56.567576  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:56.567694  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:56.596257  299667 cri.go:89] found id: ""
	I1205 07:48:56.596291  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.596300  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:56.596306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:56.596381  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:56.627549  299667 cri.go:89] found id: ""
	I1205 07:48:56.627575  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.627583  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:56.627590  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:56.627649  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:56.661291  299667 cri.go:89] found id: ""
	I1205 07:48:56.661313  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.661321  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:56.661332  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:56.661391  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:56.687435  299667 cri.go:89] found id: ""
	I1205 07:48:56.687462  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.687471  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:56.687477  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:56.687540  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:56.712238  299667 cri.go:89] found id: ""
	I1205 07:48:56.712261  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.712271  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:56.712277  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:56.712340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:56.736638  299667 cri.go:89] found id: ""
	I1205 07:48:56.736663  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.736672  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:56.736690  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:56.736748  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:56.760967  299667 cri.go:89] found id: ""
	I1205 07:48:56.761001  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.761010  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:56.761016  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:56.761075  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:56.784912  299667 cri.go:89] found id: ""
	I1205 07:48:56.784939  299667 logs.go:282] 0 containers: []
	W1205 07:48:56.784947  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:56.784958  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:56.784969  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:56.808701  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:56.808734  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:48:56.835856  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:56.835884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:56.896082  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:56.896154  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:56.914235  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:56.914310  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:56.981742  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:56.973992    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.974661    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976256    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.976839    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:56.978611    7371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
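	Each scan cycle in this log issues one crictl query per expected control-plane component and logs an empty "found id" when nothing matches. The loop below is a compact equivalent built only from commands that already appear in the Run: lines above:

	    # One crictl query per component, as the log repeats every few seconds
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done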
	I1205 07:48:59.483411  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:48:59.494080  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:48:59.494149  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:48:59.521983  299667 cri.go:89] found id: ""
	I1205 07:48:59.522007  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.522015  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:48:59.522023  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:48:59.522081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:48:59.547605  299667 cri.go:89] found id: ""
	I1205 07:48:59.547637  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.547646  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:48:59.547652  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:48:59.547718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:48:59.572816  299667 cri.go:89] found id: ""
	I1205 07:48:59.572839  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.572847  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:48:59.572854  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:48:59.572909  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:48:59.598049  299667 cri.go:89] found id: ""
	I1205 07:48:59.598070  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.598078  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:48:59.598085  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:48:59.598145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:48:59.624907  299667 cri.go:89] found id: ""
	I1205 07:48:59.624928  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.624937  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:48:59.624943  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:48:59.625001  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:48:59.651926  299667 cri.go:89] found id: ""
	I1205 07:48:59.651947  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.651955  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:48:59.651962  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:48:59.652019  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:48:59.680003  299667 cri.go:89] found id: ""
	I1205 07:48:59.680080  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.680103  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:48:59.680120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:48:59.680228  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:48:59.705437  299667 cri.go:89] found id: ""
	I1205 07:48:59.705465  299667 logs.go:282] 0 containers: []
	W1205 07:48:59.705474  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:48:59.705483  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:48:59.705493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:48:59.763111  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:48:59.763142  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:48:59.777300  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:48:59.777368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:48:59.842575  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:48:59.834367    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835192    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.835964    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837639    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:48:59.837998    7471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:48:59.842643  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:48:59.842663  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:48:59.869833  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:48:59.869908  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:01.602955  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:04.102698  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:02.402084  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:02.412782  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:02.412851  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:02.438256  299667 cri.go:89] found id: ""
	I1205 07:49:02.438279  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.438287  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:02.438294  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:02.438352  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:02.465899  299667 cri.go:89] found id: ""
	I1205 07:49:02.465926  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.465935  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:02.465942  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:02.466005  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:02.490481  299667 cri.go:89] found id: ""
	I1205 07:49:02.490503  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.490513  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:02.490519  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:02.490586  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:02.516169  299667 cri.go:89] found id: ""
	I1205 07:49:02.516196  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.516205  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:02.516211  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:02.516271  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:02.541403  299667 cri.go:89] found id: ""
	I1205 07:49:02.541429  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.541439  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:02.541445  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:02.541507  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:02.566995  299667 cri.go:89] found id: ""
	I1205 07:49:02.567017  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.567025  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:02.567032  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:02.567099  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:02.597621  299667 cri.go:89] found id: ""
	I1205 07:49:02.597644  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.597652  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:02.597657  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:02.597716  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:02.628924  299667 cri.go:89] found id: ""
	I1205 07:49:02.628951  299667 logs.go:282] 0 containers: []
	W1205 07:49:02.628960  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:02.628969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:02.628980  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:02.693315  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:02.693348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:02.707066  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:02.707162  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:02.771707  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:02.763790    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.764476    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766199    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.766818    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:02.768451    7584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:02.771729  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:02.771742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:02.797113  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:02.797145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:06.603033  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:09.102351  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:05.326530  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:05.336990  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:05.337057  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:05.360427  299667 cri.go:89] found id: ""
	I1205 07:49:05.360451  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.360460  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:05.360466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:05.360525  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:05.384196  299667 cri.go:89] found id: ""
	I1205 07:49:05.384222  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.384230  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:05.384237  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:05.384299  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:05.410321  299667 cri.go:89] found id: ""
	I1205 07:49:05.410344  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.410352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:05.410358  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:05.410417  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:05.433726  299667 cri.go:89] found id: ""
	I1205 07:49:05.433793  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.433815  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:05.433833  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:05.433921  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:05.458853  299667 cri.go:89] found id: ""
	I1205 07:49:05.458924  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.458940  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:05.458947  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:05.459008  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:05.482445  299667 cri.go:89] found id: ""
	I1205 07:49:05.482514  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.482529  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:05.482538  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:05.482610  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:05.507192  299667 cri.go:89] found id: ""
	I1205 07:49:05.507260  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.507282  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:05.507300  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:05.507393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:05.532405  299667 cri.go:89] found id: ""
	I1205 07:49:05.532439  299667 logs.go:282] 0 containers: []
	W1205 07:49:05.532448  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:05.532459  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:05.532470  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:05.587713  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:05.587744  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:05.600994  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:05.601062  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:05.676675  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:05.668840    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.669298    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671085    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.671648    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:05.673511    7696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:05.676745  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:05.676770  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:05.700917  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:05.700948  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
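	After each empty component scan, the cycle gathers the same four diagnostic sources. The commands below are taken verbatim from the Run: lines above and can be replayed directly on the node:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a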
	I1205 07:49:08.230743  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:08.241254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:08.241324  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:08.265687  299667 cri.go:89] found id: ""
	I1205 07:49:08.265765  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.265781  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:08.265789  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:08.265873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:08.291182  299667 cri.go:89] found id: ""
	I1205 07:49:08.291212  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.291222  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:08.291230  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:08.291288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:08.316404  299667 cri.go:89] found id: ""
	I1205 07:49:08.316431  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.316439  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:08.316446  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:08.316503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:08.342004  299667 cri.go:89] found id: ""
	I1205 07:49:08.342030  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.342038  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:08.342044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:08.342103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:08.370679  299667 cri.go:89] found id: ""
	I1205 07:49:08.370700  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.370708  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:08.370715  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:08.370791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:08.398788  299667 cri.go:89] found id: ""
	I1205 07:49:08.398848  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.398880  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:08.398896  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:08.398967  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:08.427499  299667 cri.go:89] found id: ""
	I1205 07:49:08.427532  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.427552  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:08.427560  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:08.427627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:08.455982  299667 cri.go:89] found id: ""
	I1205 07:49:08.456008  299667 logs.go:282] 0 containers: []
	W1205 07:49:08.456016  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:08.456025  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:08.456037  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:08.469660  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:08.469687  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:08.534660  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:08.526566    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.527489    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529404    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.529879    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:08.531389    7800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:08.534684  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:08.534697  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:08.560195  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:08.560228  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:08.590035  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:08.590061  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:49:11.102705  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:13.103312  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
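	For offline triage of a stuck start like this one, one option (a sketch, not something this run performed) is to export the whole log bundle for the affected profile with minikube's logs command:

	    # Write the collected logs for the no-preload profile to a file
	    minikube logs -p no-preload-241270 --file=no-preload-241270.log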
	I1205 07:49:11.150392  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:11.161108  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:11.161194  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:11.185243  299667 cri.go:89] found id: ""
	I1205 07:49:11.185264  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.185273  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:11.185280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:11.185338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:11.208758  299667 cri.go:89] found id: ""
	I1205 07:49:11.208797  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.208806  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:11.208815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:11.208884  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:11.235054  299667 cri.go:89] found id: ""
	I1205 07:49:11.235077  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.235086  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:11.235092  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:11.235157  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:11.259045  299667 cri.go:89] found id: ""
	I1205 07:49:11.259068  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.259076  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:11.259082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:11.259143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:11.288257  299667 cri.go:89] found id: ""
	I1205 07:49:11.288282  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.288291  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:11.288298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:11.288354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:11.312884  299667 cri.go:89] found id: ""
	I1205 07:49:11.312906  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.312914  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:11.312922  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:11.312978  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:11.341317  299667 cri.go:89] found id: ""
	I1205 07:49:11.341340  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.341348  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:11.341354  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:11.341411  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:11.365207  299667 cri.go:89] found id: ""
	I1205 07:49:11.365234  299667 logs.go:282] 0 containers: []
	W1205 07:49:11.365243  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:11.365260  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:11.365271  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:11.423587  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:11.423619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:11.437723  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:11.437796  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:11.504822  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:11.496976    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498180    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.498834    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.499952    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:11.500563    7914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:49:11.504896  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:11.504935  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:11.529753  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:11.529791  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:14.059148  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:14.069586  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:14.069676  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:14.103804  299667 cri.go:89] found id: ""
	I1205 07:49:14.103828  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.103837  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:14.103843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:14.103901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:14.135010  299667 cri.go:89] found id: ""
	I1205 07:49:14.135031  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.135040  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:14.135045  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:14.135104  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:14.170829  299667 cri.go:89] found id: ""
	I1205 07:49:14.170851  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.170859  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:14.170865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:14.170926  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:14.199693  299667 cri.go:89] found id: ""
	I1205 07:49:14.199715  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.199724  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:14.199730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:14.199789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:14.223902  299667 cri.go:89] found id: ""
	I1205 07:49:14.223924  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.223931  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:14.223937  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:14.224003  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:14.247854  299667 cri.go:89] found id: ""
	I1205 07:49:14.247926  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.247950  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:14.247969  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:14.248063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:14.272146  299667 cri.go:89] found id: ""
	I1205 07:49:14.272219  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.272250  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:14.272270  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:14.272375  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:14.297307  299667 cri.go:89] found id: ""
	I1205 07:49:14.297377  299667 logs.go:282] 0 containers: []
	W1205 07:49:14.297404  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:14.297421  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:14.297436  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:14.352148  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:14.352181  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:14.365391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:14.365420  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:14.429045  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:14.421762    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.422258    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.423906    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.424340    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:14.425860    8027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
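	
	Each pass of the loop above has the same shape: probe for a running kube-apiserver with pgrep, list CRI containers for each control-plane component with crictl, then gather journald and "describe nodes" output, which keeps failing while nothing is listening on localhost:8443. A hedged sketch of one such pass, assembled only from the commands visible in this log (the component list is assumed to match the one minikube iterates over; this is not minikube's actual implementation):
	
		#!/usr/bin/env bash
		# One pass of the health-probe loop seen in the log (sketch).
		sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		         kube-controller-manager kindnet kubernetes-dashboard; do
		    ids=$(sudo crictl ps -a --quiet --name="$c")
		    [ -z "$ids" ] && echo "no container matching \"$c\""
		done
		# Log-gathering step from the same cycle:
		sudo journalctl -u kubelet -n 400 >/dev/null
	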
	I1205 07:49:14.429068  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:14.429080  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:14.453460  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:14.453494  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:15.602762  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:17.602959  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
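	
	The interleaved W-lines come from the parallel no-preload test, which polls the node object directly at the cluster's advertised endpoint until the apiserver answers. An equivalent one-off manual check against the same URL (assuming curl is available on the host; -k skips certificate verification, which a real client would not do):
	
		# Hypothetical manual equivalent of the node_ready poll above:
		curl -k --max-time 5 \
		    https://192.168.76.2:8443/api/v1/nodes/no-preload-241270 \
		    || echo "apiserver on 192.168.76.2:8443 not reachable yet"
	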
	I1205 07:49:16.984086  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:16.994499  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:16.994567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:17.022900  299667 cri.go:89] found id: ""
	I1205 07:49:17.022923  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.022932  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:17.022939  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:17.022997  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:17.047244  299667 cri.go:89] found id: ""
	I1205 07:49:17.047318  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.047332  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:17.047339  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:17.047415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:17.070683  299667 cri.go:89] found id: ""
	I1205 07:49:17.070716  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.070725  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:17.070732  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:17.070811  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:17.104238  299667 cri.go:89] found id: ""
	I1205 07:49:17.104310  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.104332  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:17.104351  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:17.104433  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:17.130787  299667 cri.go:89] found id: ""
	I1205 07:49:17.130867  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.130890  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:17.130907  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:17.131014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:17.159177  299667 cri.go:89] found id: ""
	I1205 07:49:17.159212  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.159221  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:17.159228  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:17.159293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:17.187127  299667 cri.go:89] found id: ""
	I1205 07:49:17.187148  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.187157  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:17.187168  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:17.187225  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:17.214608  299667 cri.go:89] found id: ""
	I1205 07:49:17.214633  299667 logs.go:282] 0 containers: []
	W1205 07:49:17.214641  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:17.214650  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:17.214690  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:17.227937  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:17.227964  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:17.290517  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:17.282537    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.283384    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.284988    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.285553    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:17.287144    8137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:17.290581  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:17.290600  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:17.315039  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:17.315074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:17.343285  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:17.343348  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:19.899406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:19.910597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:19.910679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:19.935640  299667 cri.go:89] found id: ""
	I1205 07:49:19.935664  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.935673  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:19.935679  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:19.935736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:19.959309  299667 cri.go:89] found id: ""
	I1205 07:49:19.959336  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.959345  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:19.959352  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:19.959418  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:19.982862  299667 cri.go:89] found id: ""
	I1205 07:49:19.982884  299667 logs.go:282] 0 containers: []
	W1205 07:49:19.982893  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:19.982899  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:19.982957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:20.016784  299667 cri.go:89] found id: ""
	I1205 07:49:20.016810  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.016819  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:20.016826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:20.016893  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:20.044555  299667 cri.go:89] found id: ""
	I1205 07:49:20.044580  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.044590  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:20.044597  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:20.044657  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:20.080570  299667 cri.go:89] found id: ""
	I1205 07:49:20.080595  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.080603  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:20.080610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:20.080689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1205 07:49:20.102423  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:22.102493  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:24.602330  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:20.112802  299667 cri.go:89] found id: ""
	I1205 07:49:20.112829  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.112838  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:20.112852  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:20.112912  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:20.145614  299667 cri.go:89] found id: ""
	I1205 07:49:20.145642  299667 logs.go:282] 0 containers: []
	W1205 07:49:20.145650  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:20.145659  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:20.145670  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:20.208200  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:20.208233  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:20.222391  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:20.222422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:20.285471  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:20.277971    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.278773    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280374    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.280701    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:20.282159    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:20.285500  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:20.285513  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:20.311384  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:20.311415  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:22.840933  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:22.854843  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:22.854939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:22.881572  299667 cri.go:89] found id: ""
	I1205 07:49:22.881598  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.881608  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:22.881614  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:22.881677  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:22.917647  299667 cri.go:89] found id: ""
	I1205 07:49:22.917677  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.917686  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:22.917692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:22.917750  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:22.943325  299667 cri.go:89] found id: ""
	I1205 07:49:22.943346  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.943355  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:22.943362  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:22.943426  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:22.967894  299667 cri.go:89] found id: ""
	I1205 07:49:22.967955  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.967979  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:22.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:22.968076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:22.994911  299667 cri.go:89] found id: ""
	I1205 07:49:22.994976  299667 logs.go:282] 0 containers: []
	W1205 07:49:22.994991  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:22.994998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:22.995056  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:23.022399  299667 cri.go:89] found id: ""
	I1205 07:49:23.022464  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.022486  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:23.022506  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:23.022581  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:23.048262  299667 cri.go:89] found id: ""
	I1205 07:49:23.048283  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.048291  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:23.048297  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:23.048355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:23.072655  299667 cri.go:89] found id: ""
	I1205 07:49:23.072684  299667 logs.go:282] 0 containers: []
	W1205 07:49:23.072694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:23.072702  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:23.072720  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:23.132711  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:23.132742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:23.146553  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:23.146576  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:23.218207  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:23.211095    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.211677    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213270    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.213717    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:23.214890    8363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:23.218230  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:23.218243  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:23.242426  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:23.242462  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:27.102316  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:29.602939  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:25.772926  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:25.783467  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:25.783546  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:25.811044  299667 cri.go:89] found id: ""
	I1205 07:49:25.811066  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.811075  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:25.811081  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:25.811139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:25.835534  299667 cri.go:89] found id: ""
	I1205 07:49:25.835558  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.835568  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:25.835575  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:25.835637  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:25.866938  299667 cri.go:89] found id: ""
	I1205 07:49:25.866966  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.866974  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:25.866981  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:25.867043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:25.897273  299667 cri.go:89] found id: ""
	I1205 07:49:25.897302  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.897313  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:25.897320  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:25.897380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:25.923461  299667 cri.go:89] found id: ""
	I1205 07:49:25.923489  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.923497  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:25.923504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:25.923590  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:25.946791  299667 cri.go:89] found id: ""
	I1205 07:49:25.946813  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.946822  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:25.946828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:25.946885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:25.971479  299667 cri.go:89] found id: ""
	I1205 07:49:25.971507  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.971515  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:25.971521  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:25.971580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:25.994965  299667 cri.go:89] found id: ""
	I1205 07:49:25.994986  299667 logs.go:282] 0 containers: []
	W1205 07:49:25.994994  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:25.995003  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:25.995014  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:26.058667  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:26.058701  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:26.073089  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:26.073119  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:26.150334  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:26.142683    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.143534    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145294    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.145607    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:26.147094    8475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:26.150355  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:26.150367  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:26.182077  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:26.182109  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:28.710700  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:28.722142  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:28.722208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:28.749003  299667 cri.go:89] found id: ""
	I1205 07:49:28.749029  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.749037  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:28.749044  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:28.749101  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:28.774112  299667 cri.go:89] found id: ""
	I1205 07:49:28.774141  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.774152  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:28.774158  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:28.774215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:28.797966  299667 cri.go:89] found id: ""
	I1205 07:49:28.797987  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.797996  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:28.798002  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:28.798058  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:28.825668  299667 cri.go:89] found id: ""
	I1205 07:49:28.825694  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.825703  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:28.825709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:28.825788  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:28.856952  299667 cri.go:89] found id: ""
	I1205 07:49:28.856986  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.857001  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:28.857008  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:28.857091  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:28.882695  299667 cri.go:89] found id: ""
	I1205 07:49:28.882730  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.882746  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:28.882753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:28.882822  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:28.909550  299667 cri.go:89] found id: ""
	I1205 07:49:28.909584  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.909594  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:28.909601  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:28.909671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:28.942251  299667 cri.go:89] found id: ""
	I1205 07:49:28.942319  299667 logs.go:282] 0 containers: []
	W1205 07:49:28.942340  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:28.942362  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:28.942387  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:29.005506  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:28.997373    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.997796    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:28.999616    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.000051    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:29.001543    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:29.005539  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:29.005554  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:29.030880  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:29.030910  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:29.058353  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:29.058381  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:29.121228  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:29.121304  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:32.102320  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:34.103275  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:31.636506  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:31.647234  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:31.647305  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:31.672508  299667 cri.go:89] found id: ""
	I1205 07:49:31.672530  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.672539  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:31.672545  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:31.672603  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:31.696860  299667 cri.go:89] found id: ""
	I1205 07:49:31.696885  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.696894  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:31.696900  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:31.696970  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:31.722649  299667 cri.go:89] found id: ""
	I1205 07:49:31.722676  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.722685  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:31.722692  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:31.722770  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:31.748068  299667 cri.go:89] found id: ""
	I1205 07:49:31.748093  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.748101  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:31.748109  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:31.748169  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:31.773290  299667 cri.go:89] found id: ""
	I1205 07:49:31.773315  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.773324  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:31.773330  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:31.773393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:31.804425  299667 cri.go:89] found id: ""
	I1205 07:49:31.804445  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.804454  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:31.804461  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:31.804521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:31.829116  299667 cri.go:89] found id: ""
	I1205 07:49:31.829137  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.829146  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:31.829152  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:31.829241  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:31.867330  299667 cri.go:89] found id: ""
	I1205 07:49:31.867406  299667 logs.go:282] 0 containers: []
	W1205 07:49:31.867418  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:31.867427  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:31.867438  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:31.931647  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:31.931680  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:31.945211  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:31.945236  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:32.004694  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:31.996314    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.997135    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.998769    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:31.999377    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:32.000929    8702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:32.004719  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:32.004738  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:32.031538  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:32.031572  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:34.562576  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:34.573366  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:34.573477  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:34.599238  299667 cri.go:89] found id: ""
	I1205 07:49:34.599262  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.599272  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:34.599279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:34.599342  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:34.624561  299667 cri.go:89] found id: ""
	I1205 07:49:34.624589  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.624598  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:34.624604  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:34.624666  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:34.649603  299667 cri.go:89] found id: ""
	I1205 07:49:34.649624  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.649637  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:34.649644  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:34.649707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:34.674019  299667 cri.go:89] found id: ""
	I1205 07:49:34.674043  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.674052  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:34.674058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:34.674121  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:34.700890  299667 cri.go:89] found id: ""
	I1205 07:49:34.700912  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.700921  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:34.700928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:34.700988  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:34.727454  299667 cri.go:89] found id: ""
	I1205 07:49:34.727482  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.727491  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:34.727498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:34.727558  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:34.753086  299667 cri.go:89] found id: ""
	I1205 07:49:34.753107  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.753115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:34.753120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:34.753208  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:34.779077  299667 cri.go:89] found id: ""
	I1205 07:49:34.779100  299667 logs.go:282] 0 containers: []
	W1205 07:49:34.779109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:34.779118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:34.779129  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:34.839330  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:34.839368  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:34.857129  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:34.857175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:34.932420  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:34.925080    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.925635    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927078    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.927405    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:34.928805    8815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
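Every "describe nodes" attempt in this section fails the same way: kubectl, run inside the node against /var/lib/minikube/kubeconfig, dials localhost:8443 and is refused, which is consistent with the empty crictl listings above — no kube-apiserver container exists to listen on that port. A quick confirmation from inside the node that nothing is bound there (a sketch; it assumes the iproute2 `ss` utility is present in the node image):

    # -l listening sockets, -t TCP, -n numeric; look for the apiserver port
    sudo ss -ltn | grep -w 8443 \
      || echo "nothing listening on 8443: apiserver is not up"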
	I1205 07:49:34.932440  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:34.932452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:34.957616  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:34.957649  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:36.602677  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:39.102319  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
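The two warnings above carry a different ID in the klog header (297527, not the 299667 writing the surrounding lines): this log interleaves a parallel no-preload run, whose readiness poll against node no-preload-241270 hits the same symptom — the apiserver endpoint at 192.168.76.2:8443 refuses connections. That poll amounts to a GET of the node object; a rough curl equivalent is sketched below (reachability only — the real client authenticates with certificates from its kubeconfig, which this sketch omits):

    # -k skips TLS verification, -s silences progress output; a refused
    # connection makes curl exit nonzero, so the fallback message prints
    curl -ks https://192.168.76.2:8443/api/v1/nodes/no-preload-241270 \
      || echo "connection refused: this control plane is down as well"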
	I1205 07:49:37.486529  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:37.496909  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:37.496977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:37.521254  299667 cri.go:89] found id: ""
	I1205 07:49:37.521315  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.521349  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:37.521372  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:37.521462  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:37.544759  299667 cri.go:89] found id: ""
	I1205 07:49:37.544782  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.544791  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:37.544798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:37.544854  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:37.569519  299667 cri.go:89] found id: ""
	I1205 07:49:37.569549  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.569558  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:37.569564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:37.569624  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:37.593917  299667 cri.go:89] found id: ""
	I1205 07:49:37.593938  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.593947  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:37.593954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:37.594014  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:37.619915  299667 cri.go:89] found id: ""
	I1205 07:49:37.619940  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.619949  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:37.619955  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:37.620016  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:37.647160  299667 cri.go:89] found id: ""
	I1205 07:49:37.647186  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.647195  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:37.647202  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:37.647261  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:37.672076  299667 cri.go:89] found id: ""
	I1205 07:49:37.672097  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.672105  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:37.672111  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:37.672170  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:37.697550  299667 cri.go:89] found id: ""
	I1205 07:49:37.697573  299667 logs.go:282] 0 containers: []
	W1205 07:49:37.697581  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:37.697590  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:37.697601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:37.754073  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:37.754105  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:37.769043  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:37.769071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:37.831338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:37.823147    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.823873    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.824729    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826277    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:37.826806    8924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:37.831359  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:37.831371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:37.857528  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:37.857564  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:41.602800  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:44.102845  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:40.404513  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:40.415071  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:40.415143  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:40.439261  299667 cri.go:89] found id: ""
	I1205 07:49:40.439283  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.439291  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:40.439298  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:40.439355  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:40.464063  299667 cri.go:89] found id: ""
	I1205 07:49:40.464084  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.464092  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:40.464098  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:40.464158  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:40.490322  299667 cri.go:89] found id: ""
	I1205 07:49:40.490344  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.490352  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:40.490359  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:40.490419  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:40.517055  299667 cri.go:89] found id: ""
	I1205 07:49:40.517078  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.517087  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:40.517093  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:40.517151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:40.545250  299667 cri.go:89] found id: ""
	I1205 07:49:40.545273  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.545282  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:40.545288  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:40.545348  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:40.569118  299667 cri.go:89] found id: ""
	I1205 07:49:40.569142  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.569151  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:40.569188  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:40.569248  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:40.593152  299667 cri.go:89] found id: ""
	I1205 07:49:40.593209  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.593217  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:40.593223  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:40.593287  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:40.617285  299667 cri.go:89] found id: ""
	I1205 07:49:40.617308  299667 logs.go:282] 0 containers: []
	W1205 07:49:40.617316  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:40.617325  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:40.617336  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:40.681518  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:40.674019    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.675010    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.676471    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.677202    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:40.678348    9031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:40.681540  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:40.681553  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:40.707309  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:40.707347  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:40.740118  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:40.740145  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:40.798971  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:40.799001  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
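Each retry cycle in this stretch opens with the same gate, `pgrep -xnf kube-apiserver.*minikube.*`, to check whether an apiserver process exists yet, and falls through to the container and log sweep when it does not. The cycles land roughly every three seconds (07:49:34, :37, :40, ...), so the whole section is effectively one wait loop. A condensed sketch of that loop, with the iteration count chosen arbitrarily for illustration:

    # poll until an apiserver process appears; give up after ~2 minutes
    for i in $(seq 1 40); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"; break
      fi
      sleep 3   # matches the ~3s cadence visible in the timestamps
    done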
	I1205 07:49:43.313313  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:43.324257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:43.324337  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:43.356730  299667 cri.go:89] found id: ""
	I1205 07:49:43.356755  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.356763  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:43.356770  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:43.356828  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:43.386071  299667 cri.go:89] found id: ""
	I1205 07:49:43.386097  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.386106  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:43.386112  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:43.386172  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:43.415579  299667 cri.go:89] found id: ""
	I1205 07:49:43.415606  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.415615  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:43.415621  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:43.415679  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:43.441039  299667 cri.go:89] found id: ""
	I1205 07:49:43.441064  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.441075  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:43.441082  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:43.441141  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:43.466399  299667 cri.go:89] found id: ""
	I1205 07:49:43.466432  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.466442  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:43.466449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:43.466519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:43.497264  299667 cri.go:89] found id: ""
	I1205 07:49:43.497309  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.497319  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:43.497326  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:43.497397  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:43.522221  299667 cri.go:89] found id: ""
	I1205 07:49:43.522247  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.522256  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:43.522262  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:43.522325  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:43.546887  299667 cri.go:89] found id: ""
	I1205 07:49:43.546953  299667 logs.go:282] 0 containers: []
	W1205 07:49:43.546969  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:43.546980  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:43.546992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:43.613596  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:43.613644  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:43.628794  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:43.628825  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:43.698835  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:43.691146    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.691658    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693108    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.693562    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:43.695245    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:43.698854  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:43.698866  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:43.725776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:43.725811  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:49:46.103222  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:48.603225  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:46.256365  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:46.267583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:46.267659  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:46.296652  299667 cri.go:89] found id: ""
	I1205 07:49:46.296679  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.296687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:46.296694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:46.296760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:46.323489  299667 cri.go:89] found id: ""
	I1205 07:49:46.323514  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.323522  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:46.323529  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:46.323593  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:46.355225  299667 cri.go:89] found id: ""
	I1205 07:49:46.355249  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.355258  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:46.355265  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:46.355340  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:46.383644  299667 cri.go:89] found id: ""
	I1205 07:49:46.383678  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.383687  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:46.383694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:46.383768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:46.421484  299667 cri.go:89] found id: ""
	I1205 07:49:46.421518  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.421527  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:46.421533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:46.421602  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:46.447032  299667 cri.go:89] found id: ""
	I1205 07:49:46.447057  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.447066  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:46.447073  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:46.447136  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:46.472839  299667 cri.go:89] found id: ""
	I1205 07:49:46.472860  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.472867  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:46.472873  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:46.472930  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:46.501395  299667 cri.go:89] found id: ""
	I1205 07:49:46.501422  299667 logs.go:282] 0 containers: []
	W1205 07:49:46.501432  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:46.501441  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:46.501452  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:46.558146  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:46.558178  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:46.573118  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:46.573146  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:46.637720  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:46.629529    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.630263    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.631961    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.632519    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:46.634266    9261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:46.637741  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:46.637754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:46.662623  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:46.662658  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.193341  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:49.204485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:49.204616  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:49.235316  299667 cri.go:89] found id: ""
	I1205 07:49:49.235380  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.235403  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:49.235424  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:49.235503  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:49.259781  299667 cri.go:89] found id: ""
	I1205 07:49:49.259811  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.259820  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:49.259826  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:49.259894  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:49.283985  299667 cri.go:89] found id: ""
	I1205 07:49:49.284025  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.284034  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:49.284041  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:49.284123  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:49.312614  299667 cri.go:89] found id: ""
	I1205 07:49:49.312643  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.312652  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:49.312659  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:49.312728  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:49.338339  299667 cri.go:89] found id: ""
	I1205 07:49:49.338362  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.338371  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:49.338378  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:49.338444  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:49.367532  299667 cri.go:89] found id: ""
	I1205 07:49:49.367557  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.367565  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:49.367572  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:49.367635  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:49.401925  299667 cri.go:89] found id: ""
	I1205 07:49:49.402000  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.402020  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:49.402038  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:49.402122  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:49.428942  299667 cri.go:89] found id: ""
	I1205 07:49:49.428975  299667 logs.go:282] 0 containers: []
	W1205 07:49:49.428993  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:49.429003  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:49.429021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:49.492403  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:49.483297    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.483766    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.485282    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.486704    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:49.487334    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:49.492426  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:49.492439  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:49.517991  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:49.518021  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:49.545729  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:49.545754  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:49.601110  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:49.601140  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:49:51.102462  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:53.103333  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:52.115102  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:52.128449  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:52.128522  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:52.158550  299667 cri.go:89] found id: ""
	I1205 07:49:52.158575  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.158584  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:52.158591  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:52.158654  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:52.183729  299667 cri.go:89] found id: ""
	I1205 07:49:52.183750  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.183759  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:52.183765  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:52.183829  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:52.209241  299667 cri.go:89] found id: ""
	I1205 07:49:52.209269  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.209279  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:52.209286  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:52.209367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:52.234457  299667 cri.go:89] found id: ""
	I1205 07:49:52.234488  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.234497  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:52.234504  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:52.234568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:52.258774  299667 cri.go:89] found id: ""
	I1205 07:49:52.258799  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.258808  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:52.258815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:52.258904  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:52.284285  299667 cri.go:89] found id: ""
	I1205 07:49:52.284319  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.284329  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:52.284336  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:52.284406  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:52.311443  299667 cri.go:89] found id: ""
	I1205 07:49:52.311470  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.311479  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:52.311485  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:52.311577  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:52.335827  299667 cri.go:89] found id: ""
	I1205 07:49:52.335859  299667 logs.go:282] 0 containers: []
	W1205 07:49:52.335868  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:52.335879  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:52.335890  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:52.395851  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:52.395889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:52.410419  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:52.410446  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:52.478966  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:52.470906    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.471739    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.473522    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.474227    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:52.475784    9485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:52.478997  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:52.479010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:52.504082  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:52.504114  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.031406  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:55.042458  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:55.042534  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:55.066642  299667 cri.go:89] found id: ""
	I1205 07:49:55.066667  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.066677  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:55.066684  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:55.066746  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1205 07:49:55.602712  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:49:58.102265  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:49:55.091150  299667 cri.go:89] found id: ""
	I1205 07:49:55.091180  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.091189  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:55.091195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:55.091255  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:55.121930  299667 cri.go:89] found id: ""
	I1205 07:49:55.121951  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.121960  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:55.121965  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:55.122023  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:55.149981  299667 cri.go:89] found id: ""
	I1205 07:49:55.150058  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.150079  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:55.150097  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:55.150184  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:55.173681  299667 cri.go:89] found id: ""
	I1205 07:49:55.173704  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.173712  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:55.173718  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:55.173777  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:55.197308  299667 cri.go:89] found id: ""
	I1205 07:49:55.197332  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.197341  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:55.197347  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:55.197403  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:55.223472  299667 cri.go:89] found id: ""
	I1205 07:49:55.223493  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.223502  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:55.223508  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:55.223572  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:55.252432  299667 cri.go:89] found id: ""
	I1205 07:49:55.252457  299667 logs.go:282] 0 containers: []
	W1205 07:49:55.252466  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:55.252474  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:55.252487  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:55.318488  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:55.309713    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.310343    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.311952    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.312478    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:55.314634    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:55.318520  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:55.318533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:55.343511  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:55.343587  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:49:55.386735  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:55.386818  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:55.452457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:55.452497  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
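
Each probe cycle above follows the same shape: the log collector asks crictl for any container, running or exited, whose name matches one expected control-plane component, and an empty ID list for every component is what produces the run of "No container was found matching" warnings. A minimal, hypothetical Go sketch of that scan (not minikube's source; the crictl invocation is copied verbatim from the log, and sudo access on the node is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The component names probed in each cycle of the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as the log: list all containers (any state)
		// whose name matches, printing only their IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("found %d container(s) for %q\n", len(ids), name)
		}
	}
}

When this scan comes back empty for kube-apiserver in particular, the "describe nodes" step that follows can only fail with connection refused, as the blocks above and below show.
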
	I1205 07:49:57.966172  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:49:57.976919  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:49:57.976991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:49:58.003394  299667 cri.go:89] found id: ""
	I1205 07:49:58.003420  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.003429  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:49:58.003436  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:49:58.003505  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:49:58.040382  299667 cri.go:89] found id: ""
	I1205 07:49:58.040403  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.040411  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:49:58.040425  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:49:58.040486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:49:58.066131  299667 cri.go:89] found id: ""
	I1205 07:49:58.066161  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.066170  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:49:58.066177  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:49:58.066236  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:49:58.092126  299667 cri.go:89] found id: ""
	I1205 07:49:58.092149  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.092157  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:49:58.092164  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:49:58.092224  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:49:58.123111  299667 cri.go:89] found id: ""
	I1205 07:49:58.123138  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.123147  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:49:58.123154  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:49:58.123215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:49:58.155898  299667 cri.go:89] found id: ""
	I1205 07:49:58.155920  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.155929  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:49:58.155936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:49:58.156002  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:49:58.181658  299667 cri.go:89] found id: ""
	I1205 07:49:58.181684  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.181694  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:49:58.181700  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:49:58.181760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:49:58.211071  299667 cri.go:89] found id: ""
	I1205 07:49:58.211093  299667 logs.go:282] 0 containers: []
	W1205 07:49:58.211102  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:49:58.211111  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:49:58.211122  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:49:58.271505  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:49:58.271551  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:49:58.287071  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:49:58.287097  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:49:58.357627  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:49:58.347372    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.348464    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.349454    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.351189    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:49:58.352908    9713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:49:58.357680  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:49:58.357694  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:49:58.388703  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:49:58.388747  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:00.103169  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:02.602855  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:04.603343  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
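
The W-level lines tagged 297527 come from a second test process interleaved into the same log (apparently the no-preload StartStop profile, judging by the node name); it polls the node object repeatedly and treats "connection refused" as retryable. A rough Go sketch of such a retry loop using only the standard library (the URL and node name are taken from the log; TLS certificate and auth handling, which the real client gets from the kubeconfig, are deliberately omitted here):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReachable keeps polling the node URL until the API server answers
// or the context expires; while the apiserver is down the GET fails with
// "connect: connection refused", exactly as in the W lines above.
func waitNodeReachable(ctx context.Context, url string) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("error getting node (will retry): %v\n", err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	_ = waitNodeReachable(ctx, "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270")
}
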
	I1205 07:50:00.928058  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:00.939115  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:00.939186  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:00.967955  299667 cri.go:89] found id: ""
	I1205 07:50:00.967979  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.967989  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:00.967996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:00.968054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:00.994981  299667 cri.go:89] found id: ""
	I1205 07:50:00.995006  299667 logs.go:282] 0 containers: []
	W1205 07:50:00.995014  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:00.995022  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:00.995081  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:01.020388  299667 cri.go:89] found id: ""
	I1205 07:50:01.020412  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.020421  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:01.020427  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:01.020487  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:01.045771  299667 cri.go:89] found id: ""
	I1205 07:50:01.045796  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.045816  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:01.045839  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:01.045915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:01.072970  299667 cri.go:89] found id: ""
	I1205 07:50:01.072995  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.073004  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:01.073009  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:01.073069  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:01.110343  299667 cri.go:89] found id: ""
	I1205 07:50:01.110365  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.110374  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:01.110382  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:01.110442  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:01.143588  299667 cri.go:89] found id: ""
	I1205 07:50:01.143627  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.143669  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:01.143676  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:01.143734  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:01.173718  299667 cri.go:89] found id: ""
	I1205 07:50:01.173744  299667 logs.go:282] 0 containers: []
	W1205 07:50:01.173753  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:01.173762  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:01.173775  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:01.240437  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:01.231586    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.232341    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.233947    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.234256    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:01.236538    9817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:01.240461  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:01.240475  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:01.265849  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:01.265884  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:01.295649  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:01.295676  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:01.352457  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:01.352493  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:03.872935  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:03.884137  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:03.884213  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:03.909107  299667 cri.go:89] found id: ""
	I1205 07:50:03.909129  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.909138  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:03.909144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:03.909231  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:03.935188  299667 cri.go:89] found id: ""
	I1205 07:50:03.935217  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.935229  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:03.935235  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:03.935293  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:03.960991  299667 cri.go:89] found id: ""
	I1205 07:50:03.961013  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.961023  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:03.961029  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:03.961087  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:03.993563  299667 cri.go:89] found id: ""
	I1205 07:50:03.993586  299667 logs.go:282] 0 containers: []
	W1205 07:50:03.993595  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:03.993602  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:03.993658  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:04.022615  299667 cri.go:89] found id: ""
	I1205 07:50:04.022640  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.022650  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:04.022656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:04.022744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:04.052044  299667 cri.go:89] found id: ""
	I1205 07:50:04.052067  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.052076  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:04.052083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:04.052155  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:04.077688  299667 cri.go:89] found id: ""
	I1205 07:50:04.077766  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.077790  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:04.077798  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:04.077873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:04.108745  299667 cri.go:89] found id: ""
	I1205 07:50:04.108772  299667 logs.go:282] 0 containers: []
	W1205 07:50:04.108781  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:04.108790  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:04.108806  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:04.124370  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:04.124398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:04.202708  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:04.194627    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.195266    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197057    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.197747    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:04.199395    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:04.202730  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:04.202742  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:04.228486  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:04.228522  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:04.257187  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:04.257214  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:07.102231  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:09.102419  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
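
Every cycle opens with a pgrep check for a kube-apiserver process on the host before falling back to the per-component crictl scan; pgrep exiting nonzero (no match) is what sends the collector straight back into the scan. A tiny illustrative Go wrapper around the same invocation (flags copied from the log; not minikube's actual check):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x: match the whole command name, -n: newest matching process,
	// -f: match against the full command line (so the pattern can include
	// "minikube"). pgrep exits 1 when nothing matches.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver PID: %s", out)
}
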
	I1205 07:50:06.817489  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:06.828313  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:06.828385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:06.852373  299667 cri.go:89] found id: ""
	I1205 07:50:06.852445  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.852468  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:06.852489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:06.852557  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:06.877263  299667 cri.go:89] found id: ""
	I1205 07:50:06.877291  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.877300  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:06.877306  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:06.877373  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:06.902856  299667 cri.go:89] found id: ""
	I1205 07:50:06.902882  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.902892  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:06.902898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:06.902962  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:06.928569  299667 cri.go:89] found id: ""
	I1205 07:50:06.928595  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.928604  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:06.928611  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:06.928689  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:06.953448  299667 cri.go:89] found id: ""
	I1205 07:50:06.953481  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.953491  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:06.953498  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:06.953567  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:06.978486  299667 cri.go:89] found id: ""
	I1205 07:50:06.978557  299667 logs.go:282] 0 containers: []
	W1205 07:50:06.978579  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:06.978592  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:06.978653  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:07.004116  299667 cri.go:89] found id: ""
	I1205 07:50:07.004201  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.004245  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:07.004278  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:07.004369  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:07.030912  299667 cri.go:89] found id: ""
	I1205 07:50:07.030946  299667 logs.go:282] 0 containers: []
	W1205 07:50:07.030956  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:07.030966  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:07.030995  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:07.087669  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:07.087703  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:07.102364  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:07.102424  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:07.175733  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:07.168142   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.168577   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170150   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.170636   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:07.172323   10053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:07.175756  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:07.175768  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:07.201087  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:07.201120  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
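
The repeated "failed describe nodes" blocks all have the same anatomy: kubectl exits with status 1, stdout stays empty, and everything useful lands on stderr, which the collector then echoes twice (once inside the error text and once between the ** stderr ** markers). A hypothetical Go sketch of capturing exit status, stdout, and stderr separately for that command (paths copied from the log; not minikube's actual runner):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With no apiserver listening, kubectl exits 1 with empty stdout and
		// the "connection refused" lines on stderr, as in the blocks above.
		fmt.Printf("Process exited with status %d\n", exitErr.ExitCode())
		fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
	} else if err != nil {
		fmt.Println("run failed:", err)
	}
}
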
	I1205 07:50:09.733660  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:09.744254  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:09.744322  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:09.768703  299667 cri.go:89] found id: ""
	I1205 07:50:09.768725  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.768733  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:09.768740  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:09.768803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:09.792862  299667 cri.go:89] found id: ""
	I1205 07:50:09.792884  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.792892  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:09.792898  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:09.792953  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:09.816998  299667 cri.go:89] found id: ""
	I1205 07:50:09.817020  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.817028  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:09.817042  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:09.817098  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:09.846103  299667 cri.go:89] found id: ""
	I1205 07:50:09.846128  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.846137  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:09.846144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:09.846215  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:09.869920  299667 cri.go:89] found id: ""
	I1205 07:50:09.869943  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.869952  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:09.869958  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:09.870017  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:09.894186  299667 cri.go:89] found id: ""
	I1205 07:50:09.894207  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.894216  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:09.894222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:09.894279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:09.918290  299667 cri.go:89] found id: ""
	I1205 07:50:09.918323  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.918332  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:09.918338  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:09.918404  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:09.942213  299667 cri.go:89] found id: ""
	I1205 07:50:09.942241  299667 logs.go:282] 0 containers: []
	W1205 07:50:09.942250  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:09.942260  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:09.942300  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:09.971801  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:09.971827  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:10.027693  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:10.027732  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:10.042067  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:10.042095  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:11.102920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:13.602347  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:10.106137  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:10.097491   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.098155   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100028   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.100813   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:10.102539   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:10.106162  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:10.106175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.633673  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:12.645469  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:12.645547  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:12.676971  299667 cri.go:89] found id: ""
	I1205 07:50:12.676997  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.677007  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:12.677014  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:12.677084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:12.702338  299667 cri.go:89] found id: ""
	I1205 07:50:12.702361  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.702370  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:12.702377  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:12.702436  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:12.726932  299667 cri.go:89] found id: ""
	I1205 07:50:12.726958  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.726968  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:12.726974  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:12.727054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:12.752194  299667 cri.go:89] found id: ""
	I1205 07:50:12.752231  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.752240  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:12.752246  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:12.752354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:12.777805  299667 cri.go:89] found id: ""
	I1205 07:50:12.777874  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.777897  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:12.777917  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:12.777990  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:12.802215  299667 cri.go:89] found id: ""
	I1205 07:50:12.802240  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.802250  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:12.802257  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:12.802334  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:12.831796  299667 cri.go:89] found id: ""
	I1205 07:50:12.831821  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.831830  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:12.831836  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:12.831899  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:12.856886  299667 cri.go:89] found id: ""
	I1205 07:50:12.856912  299667 logs.go:282] 0 containers: []
	W1205 07:50:12.856921  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:12.856930  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:12.856941  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:12.870323  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:12.870352  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:12.933303  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:12.925114   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.925946   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927547   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.927857   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:12.929518   10273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:12.933325  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:12.933339  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:12.958156  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:12.958191  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:12.986132  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:12.986158  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 07:50:15.602727  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:17.602807  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:15.543265  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:15.553756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:15.553824  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:15.579618  299667 cri.go:89] found id: ""
	I1205 07:50:15.579641  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.579650  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:15.579656  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:15.579719  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:15.615622  299667 cri.go:89] found id: ""
	I1205 07:50:15.615646  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.615654  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:15.615660  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:15.615718  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:15.648566  299667 cri.go:89] found id: ""
	I1205 07:50:15.648595  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.648604  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:15.648610  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:15.648669  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:15.678106  299667 cri.go:89] found id: ""
	I1205 07:50:15.678132  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.678141  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:15.678147  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:15.678210  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:15.703125  299667 cri.go:89] found id: ""
	I1205 07:50:15.703148  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.703157  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:15.703163  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:15.703229  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:15.727847  299667 cri.go:89] found id: ""
	I1205 07:50:15.727873  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.727882  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:15.727889  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:15.727948  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:15.755105  299667 cri.go:89] found id: ""
	I1205 07:50:15.755129  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.755138  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:15.755144  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:15.755203  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:15.780309  299667 cri.go:89] found id: ""
	I1205 07:50:15.780334  299667 logs.go:282] 0 containers: []
	W1205 07:50:15.780343  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:15.780351  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:15.780362  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:15.836755  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:15.836788  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:15.850164  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:15.850241  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:15.913792  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:15.906315   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.906858   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908390   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.908956   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:15.910572   10385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:15.913812  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:15.913828  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:15.938310  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:15.938344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.465299  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:18.475870  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:18.475939  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:18.501780  299667 cri.go:89] found id: ""
	I1205 07:50:18.501806  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.501821  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:18.501828  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:18.501886  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:18.526890  299667 cri.go:89] found id: ""
	I1205 07:50:18.526920  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.526929  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:18.526936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:18.526996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:18.552506  299667 cri.go:89] found id: ""
	I1205 07:50:18.552531  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.552540  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:18.552546  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:18.552605  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:18.577492  299667 cri.go:89] found id: ""
	I1205 07:50:18.577517  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.577526  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:18.577533  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:18.577591  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:18.609705  299667 cri.go:89] found id: ""
	I1205 07:50:18.609731  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.609740  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:18.609746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:18.609804  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:18.637216  299667 cri.go:89] found id: ""
	I1205 07:50:18.637242  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.637251  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:18.637258  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:18.637315  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:18.663025  299667 cri.go:89] found id: ""
	I1205 07:50:18.663051  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.663060  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:18.663067  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:18.663145  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:18.689022  299667 cri.go:89] found id: ""
	I1205 07:50:18.689086  299667 logs.go:282] 0 containers: []
	W1205 07:50:18.689109  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:18.689131  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:18.689192  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:18.703250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:18.703279  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:18.768192  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:18.760123   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.760870   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.762614   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.763280   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:18.764959   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" lines immediately above]
	** /stderr **
	I1205 07:50:18.768211  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:18.768223  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:18.793554  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:18.793585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:18.828893  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:18.828920  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
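Each numbered cycle in this stream is minikube's control-plane probe: pgrep for a kube-apiserver process, a crictl query per expected component, then kubelet, dmesg, containerd and container-status log collection. To rerun the probe by hand, a minimal sketch using only commands already shown in this log ("<profile>" is a placeholder; the excerpt never names the profile behind pid 299667):

    # Count containers per control-plane component, as the probe does.
    minikube ssh -p <profile> -- 'for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do printf "%s: " "$c"; sudo crictl ps -a --quiet --name="$c" | wc -l; done'

A zero count for every component corresponds to the repeated "found id" / "0 containers" lines above.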
	W1205 07:50:20.102540  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:22.602506  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:24.602962  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
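Two processes are interleaved in this part of the report: pid 299667 (the crictl/describe-nodes cycle) and pid 297527, which is polling the Ready condition of node "no-preload-241270" from a different test. To read one stream at a time, filtering the saved report by pid is enough (sketch; "report.log" is a placeholder filename):

    grep ' 299667 ' report.log    # the container probe and log gathering
    grep ' 297527 ' report.log    # the no-preload-241270 Ready polling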
	[07:50:21, process 299667: the probe and log-gathering cycle above repeats with only timestamps changed; pgrep and crictl again found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet or kubernetes-dashboard containers; kubelet, dmesg, containerd and container status logs gathered; "describe nodes" (kubectl pid 10603) failed with connection refused on localhost:8443]
	[07:50:24, process 299667: same cycle; no containers found; "describe nodes" (kubectl pid 10734) refused on localhost:8443]
	W1205 07:50:27.102442  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:29.102897  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
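Both streams are failing for the same underlying reason: nothing answers on port 8443, on localhost for pid 299667 and on 192.168.76.2 for pid 297527. A quick way to confirm from inside the node whether an apiserver is listening at all (sketch; assumes ss and curl are available in the node image, and "<profile>" is again a placeholder):

    minikube ssh -p <profile> -- 'sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"'
    minikube ssh -p <profile> -- 'curl -sk https://localhost:8443/healthz; echo'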
	[07:50:27, process 299667: same cycle; no containers found; "describe nodes" (kubectl pid 10845) refused on localhost:8443]
	W1205 07:50:31.602443  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:34.102266  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
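The node_ready.go retries above boil down to reading one condition off the node object. The equivalent manual check, once the apiserver is reachable (sketch; minikube normally creates a kubeconfig context named after the profile):

    kubectl --context no-preload-241270 get node no-preload-241270 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'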
	[07:50:30, process 299667: same cycle; no containers found; "describe nodes" (kubectl pid 10957) refused on localhost:8443]
	[07:50:33, process 299667: same cycle; no containers found; "describe nodes" (kubectl pid 11058) refused on localhost:8443]
	W1205 07:50:36.102706  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:38.602248  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
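When crictl keeps reporting zero control-plane containers, the kubelet journal that these cycles are already collecting is the usual place the root cause surfaces (static pods failing to start, image pulls failing, certificates rejected). A focused variant of the command from the log (sketch; "<profile>" is a placeholder):

    minikube ssh -p <profile> -- 'sudo journalctl -u kubelet -n 400 --no-pager | grep -iE "error|fail" | tail -n 40'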
	[07:50:36, process 299667: same cycle; no containers found; "describe nodes" (kubectl pid 11161) refused on localhost:8443]
	I1205 07:50:39.002493  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:39.016301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:39.016371  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:39.041723  299667 cri.go:89] found id: ""
	I1205 07:50:39.041799  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.041815  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:39.041823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:39.041885  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:39.066151  299667 cri.go:89] found id: ""
	I1205 07:50:39.066174  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.066183  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:39.066189  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:39.066266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:39.090650  299667 cri.go:89] found id: ""
	I1205 07:50:39.090673  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.090682  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:39.090688  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:39.090745  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:39.119700  299667 cri.go:89] found id: ""
	I1205 07:50:39.119732  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.119740  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:39.119747  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:39.119810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:39.144307  299667 cri.go:89] found id: ""
	I1205 07:50:39.144369  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.144389  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:39.144406  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:39.144488  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:39.171025  299667 cri.go:89] found id: ""
	I1205 07:50:39.171048  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.171057  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:39.171063  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:39.171127  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:39.195100  299667 cri.go:89] found id: ""
	I1205 07:50:39.195121  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.195130  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:39.195136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:39.195197  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:39.218959  299667 cri.go:89] found id: ""
	I1205 07:50:39.218980  299667 logs.go:282] 0 containers: []
	W1205 07:50:39.218991  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
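
[Editor's note] Each probe cycle above sweeps a fixed list of control-plane component names through crictl and records an empty ID list for every one of them. A compact sketch of that sweep, again with only os/exec (illustrative; the real probing lives in minikube's cri.go, which this does not reproduce):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same command the log shows: list all containers whose name matches.
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers\n", name, len(ids))
    	}
    }

In this run every sweep returns zero containers, which is why the log then falls back to gathering kubelet, dmesg, and containerd logs instead.
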
	I1205 07:50:39.219000  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:39.219010  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:39.243315  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:39.243346  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:39.270633  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:39.270709  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:39.330141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:39.330172  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:39.345855  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:39.345883  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:39.426940  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:39.419750   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.420563   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422231   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.422524   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:39.423998   11300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
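
[Editor's note] The repeated "connection refused" from the bundled kubectl is consistent with the empty crictl listings above: no kube-apiserver container is running, so nothing listens on the port 8443 that /var/lib/minikube/kubeconfig points at. A quick, hypothetical way to confirm that from the host is a bare TCP dial (this check is not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the apiserver port referenced by the kubeconfig.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no apiserver container running, this prints "connection refused".
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
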
	W1205 07:50:40.603240  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:43.103156  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
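
[Editor's note] The interleaved warnings tagged with pid 297527 come from a second, concurrent profile ("no-preload-241270") polling its node's Ready condition and retrying each time the apiserver dial fails. A sketch of that kind of poll loop with client-go, assuming a clientset already built from the profile's kubeconfig (names and intervals are illustrative, not necessarily what minikube's node_ready.go does):

    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady retries on errors (as the warnings above do) until the
    // named node reports Ready or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // swallow the error and retry, like the log
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }
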
	I1205 07:50:41.928763  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:41.939293  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:41.939415  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:41.964816  299667 cri.go:89] found id: ""
	I1205 07:50:41.964850  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.964859  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:41.964865  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:41.964931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:41.990880  299667 cri.go:89] found id: ""
	I1205 07:50:41.990914  299667 logs.go:282] 0 containers: []
	W1205 07:50:41.990923  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:41.990929  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:41.990996  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:42.022456  299667 cri.go:89] found id: ""
	I1205 07:50:42.022483  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.022494  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:42.022501  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:42.022570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:42.049261  299667 cri.go:89] found id: ""
	I1205 07:50:42.049328  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.049352  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:42.049369  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:42.049446  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:42.077034  299667 cri.go:89] found id: ""
	I1205 07:50:42.077108  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.077134  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:42.077255  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:42.077338  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:42.114881  299667 cri.go:89] found id: ""
	I1205 07:50:42.114910  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.114921  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:42.114928  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:42.114994  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:42.151897  299667 cri.go:89] found id: ""
	I1205 07:50:42.151926  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.151936  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:42.151944  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:42.152012  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:42.185532  299667 cri.go:89] found id: ""
	I1205 07:50:42.185556  299667 logs.go:282] 0 containers: []
	W1205 07:50:42.185565  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:42.185574  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:42.185585  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:42.246490  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:42.246537  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:42.262324  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:42.262359  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:42.331135  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:42.322192   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323101   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.323848   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.325628   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:42.326424   11398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:42.331201  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:42.331219  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:42.358803  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:42.358836  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:44.909321  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:44.920001  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:44.920070  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:44.945367  299667 cri.go:89] found id: ""
	I1205 07:50:44.945392  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.945401  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:44.945407  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:44.945463  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:44.970751  299667 cri.go:89] found id: ""
	I1205 07:50:44.970779  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.970788  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:44.970794  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:44.970873  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:44.999654  299667 cri.go:89] found id: ""
	I1205 07:50:44.999678  299667 logs.go:282] 0 containers: []
	W1205 07:50:44.999688  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:44.999694  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:44.999760  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:45.065387  299667 cri.go:89] found id: ""
	I1205 07:50:45.065496  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.065521  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:45.065554  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:45.065661  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1205 07:50:45.105072  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:47.602920  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:45.101338  299667 cri.go:89] found id: ""
	I1205 07:50:45.101365  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.101375  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:45.101386  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:45.101459  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:45.140148  299667 cri.go:89] found id: ""
	I1205 07:50:45.140181  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.140192  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:45.140200  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:45.140301  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:45.178981  299667 cri.go:89] found id: ""
	I1205 07:50:45.179025  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.179035  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:45.179043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:45.179176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:45.219922  299667 cri.go:89] found id: ""
	I1205 07:50:45.219949  299667 logs.go:282] 0 containers: []
	W1205 07:50:45.219958  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:45.219969  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:45.219989  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:45.291787  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:45.291824  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:45.306539  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:45.306565  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:45.383110  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:45.374647   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.375525   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377310   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.377944   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:45.379577   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:45.383171  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:45.383206  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:45.410722  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:45.410808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:47.941304  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:47.952011  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:47.952084  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:47.978179  299667 cri.go:89] found id: ""
	I1205 07:50:47.978201  299667 logs.go:282] 0 containers: []
	W1205 07:50:47.978210  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:47.978216  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:47.978274  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:48.005927  299667 cri.go:89] found id: ""
	I1205 07:50:48.005954  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.005964  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:48.005971  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:48.006042  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:48.040049  299667 cri.go:89] found id: ""
	I1205 07:50:48.040133  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.040156  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:48.040175  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:48.040269  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:48.066524  299667 cri.go:89] found id: ""
	I1205 07:50:48.066549  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.066558  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:48.066564  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:48.066627  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:48.096997  299667 cri.go:89] found id: ""
	I1205 07:50:48.097026  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.097036  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:48.097043  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:48.097103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:48.123968  299667 cri.go:89] found id: ""
	I1205 07:50:48.123990  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.123999  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:48.124005  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:48.124066  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:48.151529  299667 cri.go:89] found id: ""
	I1205 07:50:48.151554  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.151564  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:48.151570  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:48.151629  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:48.181245  299667 cri.go:89] found id: ""
	I1205 07:50:48.181270  299667 logs.go:282] 0 containers: []
	W1205 07:50:48.181279  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:48.181297  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:48.181308  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:48.240786  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:48.240832  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:48.255504  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:48.255533  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:48.325828  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:48.318647   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.319282   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321001   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.321504   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:48.322548   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:48.325849  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:48.325862  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:48.350818  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:48.350898  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 07:50:50.103331  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:52.602745  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:50.887376  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:50.898712  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:50.898787  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:50.926387  299667 cri.go:89] found id: ""
	I1205 07:50:50.926412  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.926421  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:50.926428  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:50.926499  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:50.951318  299667 cri.go:89] found id: ""
	I1205 07:50:50.951341  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.951349  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:50.951356  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:50.951431  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:50.978509  299667 cri.go:89] found id: ""
	I1205 07:50:50.978536  299667 logs.go:282] 0 containers: []
	W1205 07:50:50.978545  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:50.978551  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:50.978614  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:51.017851  299667 cri.go:89] found id: ""
	I1205 07:50:51.017875  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.017884  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:51.017894  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:51.017957  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:51.048705  299667 cri.go:89] found id: ""
	I1205 07:50:51.048772  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.048797  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:51.048815  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:51.048901  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:51.078364  299667 cri.go:89] found id: ""
	I1205 07:50:51.078427  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.078448  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:51.078468  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:51.078560  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:51.110914  299667 cri.go:89] found id: ""
	I1205 07:50:51.110955  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.110965  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:51.110970  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:51.111064  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:51.136737  299667 cri.go:89] found id: ""
	I1205 07:50:51.136762  299667 logs.go:282] 0 containers: []
	W1205 07:50:51.136771  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:51.136781  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:51.136793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:51.197928  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:51.190160   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.190956   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192506   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.192802   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:51.194243   11732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:51.197949  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:51.197961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:51.222938  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:51.222968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:51.253887  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:51.253914  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:51.309729  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:51.309759  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:53.824280  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:53.834821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:53.834895  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:53.882567  299667 cri.go:89] found id: ""
	I1205 07:50:53.882607  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.882617  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:53.882623  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:53.882708  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:53.924413  299667 cri.go:89] found id: ""
	I1205 07:50:53.924439  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.924447  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:53.924454  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:53.924521  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:53.949296  299667 cri.go:89] found id: ""
	I1205 07:50:53.949329  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.949339  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:53.949345  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:53.949421  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:53.973974  299667 cri.go:89] found id: ""
	I1205 07:50:53.974036  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.974050  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:53.974058  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:53.974114  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:53.999073  299667 cri.go:89] found id: ""
	I1205 07:50:53.999139  299667 logs.go:282] 0 containers: []
	W1205 07:50:53.999154  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:53.999162  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:53.999221  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:54.026401  299667 cri.go:89] found id: ""
	I1205 07:50:54.026425  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.026434  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:54.026441  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:54.026523  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:54.056156  299667 cri.go:89] found id: ""
	I1205 07:50:54.056181  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.056191  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:54.056197  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:54.056266  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:54.080916  299667 cri.go:89] found id: ""
	I1205 07:50:54.080955  299667 logs.go:282] 0 containers: []
	W1205 07:50:54.080964  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:54.080973  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:54.080985  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:54.105836  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:54.105870  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:54.134673  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:54.134702  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:54.191141  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:54.191175  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:54.204290  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:54.204332  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:54.267087  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:54.259438   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.260127   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.261603   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.262247   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:54.263866   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1205 07:50:55.102767  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:57.103256  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:50:59.602402  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:50:56.768821  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:56.779222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:56.779288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:56.807155  299667 cri.go:89] found id: ""
	I1205 07:50:56.807179  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.807188  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:56.807195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:56.807280  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:56.831710  299667 cri.go:89] found id: ""
	I1205 07:50:56.831737  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.831746  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:56.831753  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:56.831812  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:56.867145  299667 cri.go:89] found id: ""
	I1205 07:50:56.867169  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.867178  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:56.867185  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:56.867243  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:56.893127  299667 cri.go:89] found id: ""
	I1205 07:50:56.893152  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.893174  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:56.893180  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:56.893237  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:56.922421  299667 cri.go:89] found id: ""
	I1205 07:50:56.922450  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.922460  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:56.922466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:56.922543  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:56.945778  299667 cri.go:89] found id: ""
	I1205 07:50:56.945808  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.945817  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:56.945823  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:56.945907  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:56.974442  299667 cri.go:89] found id: ""
	I1205 07:50:56.974473  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.974482  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:56.974489  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:56.974559  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:56.998662  299667 cri.go:89] found id: ""
	I1205 07:50:56.998685  299667 logs.go:282] 0 containers: []
	W1205 07:50:56.998694  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:56.998703  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:56.998715  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:50:57.058833  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:50:57.058867  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:50:57.072293  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:50:57.072322  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:50:57.139010  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:50:57.131474   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.132108   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133568   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.133885   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:50:57.135363   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:50:57.139030  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:50:57.139042  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:50:57.163607  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:57.163639  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.693334  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:50:59.704756  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:50:59.704870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:50:59.732171  299667 cri.go:89] found id: ""
	I1205 07:50:59.732198  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.732208  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:50:59.732214  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:50:59.732272  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:50:59.757954  299667 cri.go:89] found id: ""
	I1205 07:50:59.757981  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.757990  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:50:59.757996  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:50:59.758076  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:50:59.787824  299667 cri.go:89] found id: ""
	I1205 07:50:59.787846  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.787855  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:50:59.787862  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:50:59.787977  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:50:59.813474  299667 cri.go:89] found id: ""
	I1205 07:50:59.813497  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.813506  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:50:59.813512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:50:59.813580  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:50:59.842057  299667 cri.go:89] found id: ""
	I1205 07:50:59.842079  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.842088  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:50:59.842094  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:50:59.842162  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:50:59.872569  299667 cri.go:89] found id: ""
	I1205 07:50:59.872593  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.872602  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:50:59.872608  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:50:59.872671  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:50:59.905410  299667 cri.go:89] found id: ""
	I1205 07:50:59.905435  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.905443  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:50:59.905450  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:50:59.905514  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:50:59.932703  299667 cri.go:89] found id: ""
	I1205 07:50:59.932744  299667 logs.go:282] 0 containers: []
	W1205 07:50:59.932754  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:50:59.932763  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:50:59.932774  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:50:59.964043  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:50:59.964069  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:00.020877  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:00.023486  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:00.055130  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:00.055166  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:02.102411  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:04.602356  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:00.182237  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:00.169446   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.170001   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.172936   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.173407   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:00.176709   12088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:00.182280  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:00.182298  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:02.739834  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:02.750886  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:02.750958  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:02.776293  299667 cri.go:89] found id: ""
	I1205 07:51:02.776319  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.776328  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:02.776334  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:02.776393  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:02.803043  299667 cri.go:89] found id: ""
	I1205 07:51:02.803080  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.803089  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:02.803096  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:02.803176  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:02.827935  299667 cri.go:89] found id: ""
	I1205 07:51:02.827957  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.827966  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:02.827972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:02.828031  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:02.859181  299667 cri.go:89] found id: ""
	I1205 07:51:02.859204  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.859215  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:02.859222  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:02.859282  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:02.893626  299667 cri.go:89] found id: ""
	I1205 07:51:02.893668  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.893678  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:02.893685  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:02.893755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:02.924778  299667 cri.go:89] found id: ""
	I1205 07:51:02.924808  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.924818  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:02.924830  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:02.924890  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:02.950184  299667 cri.go:89] found id: ""
	I1205 07:51:02.950211  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.950220  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:02.950229  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:02.950288  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:02.976829  299667 cri.go:89] found id: ""
	I1205 07:51:02.976855  299667 logs.go:282] 0 containers: []
	W1205 07:51:02.976865  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:02.976874  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:02.976885  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:03.015998  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:03.016071  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:03.072438  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:03.072473  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:03.087250  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:03.087283  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:03.153281  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:03.146021   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.146589   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148074   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.148490   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:03.149992   12200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:03.153306  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:03.153319  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:07.103249  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	W1205 07:51:09.602341  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:05.678289  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:05.688964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:05.689032  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:05.714382  299667 cri.go:89] found id: ""
	I1205 07:51:05.714403  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.714412  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:05.714419  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:05.714486  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:05.743946  299667 cri.go:89] found id: ""
	I1205 07:51:05.743968  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.743976  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:05.743983  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:05.744043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:05.768270  299667 cri.go:89] found id: ""
	I1205 07:51:05.768293  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.768303  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:05.768309  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:05.768367  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:05.795557  299667 cri.go:89] found id: ""
	I1205 07:51:05.795580  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.795588  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:05.795595  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:05.795652  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:05.820607  299667 cri.go:89] found id: ""
	I1205 07:51:05.820634  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.820643  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:05.820649  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:05.820707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:05.853624  299667 cri.go:89] found id: ""
	I1205 07:51:05.853648  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.853657  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:05.853670  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:05.853752  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:05.885144  299667 cri.go:89] found id: ""
	I1205 07:51:05.885200  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.885213  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:05.885219  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:05.885296  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:05.917755  299667 cri.go:89] found id: ""
	I1205 07:51:05.917777  299667 logs.go:282] 0 containers: []
	W1205 07:51:05.917785  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:05.917794  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:05.917808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:05.978242  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:05.978286  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:05.992931  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:05.992961  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:06.070949  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:06.062573   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.063438   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065095   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.065957   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:06.067687   12303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:06.070979  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:06.070992  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:06.096749  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:06.096780  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.634532  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:08.646959  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:08.647038  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:08.678851  299667 cri.go:89] found id: ""
	I1205 07:51:08.678875  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.678884  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:08.678890  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:08.678954  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:08.702970  299667 cri.go:89] found id: ""
	I1205 07:51:08.702992  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.703001  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:08.703006  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:08.703063  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:08.727238  299667 cri.go:89] found id: ""
	I1205 07:51:08.727259  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.727267  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:08.727273  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:08.727329  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:08.752084  299667 cri.go:89] found id: ""
	I1205 07:51:08.752106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.752114  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:08.752120  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:08.752183  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:08.775775  299667 cri.go:89] found id: ""
	I1205 07:51:08.775797  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.775805  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:08.775811  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:08.775878  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:08.800101  299667 cri.go:89] found id: ""
	I1205 07:51:08.800122  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.800130  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:08.800136  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:08.800193  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:08.826081  299667 cri.go:89] found id: ""
	I1205 07:51:08.826106  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.826115  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:08.826121  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:08.826179  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:08.850937  299667 cri.go:89] found id: ""
	I1205 07:51:08.850969  299667 logs.go:282] 0 containers: []
	W1205 07:51:08.850979  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:08.850987  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:08.851004  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:08.884057  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:08.884093  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:08.946750  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:08.946793  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:08.960852  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:08.960880  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:09.030565  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:09.022707   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.023465   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025187   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.025800   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:09.027345   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:09.030587  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:09.030601  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1205 07:51:11.602638  297527 node_ready.go:55] error getting node "no-preload-241270" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-241270": dial tcp 192.168.76.2:8443: connect: connection refused
	I1205 07:51:12.602298  297527 node_ready.go:38] duration metric: took 6m0.000452624s for node "no-preload-241270" to be "Ready" ...
	I1205 07:51:12.605551  297527 out.go:203] 
	W1205 07:51:12.608371  297527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 07:51:12.608388  297527 out.go:285] * 
	W1205 07:51:12.610554  297527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:51:12.612665  297527 out.go:203] 
	I1205 07:51:11.556651  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:11.567626  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:11.567701  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:11.595760  299667 cri.go:89] found id: ""
	I1205 07:51:11.595786  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.595795  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:11.595802  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:11.595859  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:11.646030  299667 cri.go:89] found id: ""
	I1205 07:51:11.646056  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.646065  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:11.646072  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:11.646138  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:11.675282  299667 cri.go:89] found id: ""
	I1205 07:51:11.675310  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.675319  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:11.675325  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:11.675385  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:11.699688  299667 cri.go:89] found id: ""
	I1205 07:51:11.699712  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.699721  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:11.699727  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:11.699791  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:11.723819  299667 cri.go:89] found id: ""
	I1205 07:51:11.723843  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.723852  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:11.723859  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:11.723915  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:11.751470  299667 cri.go:89] found id: ""
	I1205 07:51:11.751496  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.751505  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:11.751512  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:11.751568  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:11.775893  299667 cri.go:89] found id: ""
	I1205 07:51:11.775921  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.775929  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:11.775936  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:11.775993  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:11.802990  299667 cri.go:89] found id: ""
	I1205 07:51:11.803012  299667 logs.go:282] 0 containers: []
	W1205 07:51:11.803021  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:11.803033  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:11.803044  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:11.859684  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:11.859767  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:11.876859  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:11.876889  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:11.952118  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:11.944168   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.944893   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.946566   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.947157   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:11.948800   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:11.952191  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:11.952220  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:11.976596  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:11.976630  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.510895  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:14.522084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:14.522151  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:14.554050  299667 cri.go:89] found id: ""
	I1205 07:51:14.554069  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.554078  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:14.554084  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:14.554139  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:14.581712  299667 cri.go:89] found id: ""
	I1205 07:51:14.581732  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.581740  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:14.581746  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:14.581810  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:14.658701  299667 cri.go:89] found id: ""
	I1205 07:51:14.658723  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.658731  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:14.658737  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:14.658803  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:14.686921  299667 cri.go:89] found id: ""
	I1205 07:51:14.686940  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.686948  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:14.686954  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:14.687024  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:14.720928  299667 cri.go:89] found id: ""
	I1205 07:51:14.720949  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.720957  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:14.720972  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:14.721046  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:14.758959  299667 cri.go:89] found id: ""
	I1205 07:51:14.758983  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.758992  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:14.758998  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:14.759054  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:14.810754  299667 cri.go:89] found id: ""
	I1205 07:51:14.810775  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.810888  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:14.810895  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:14.810966  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:14.865350  299667 cri.go:89] found id: ""
	I1205 07:51:14.865369  299667 logs.go:282] 0 containers: []
	W1205 07:51:14.865379  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:14.865387  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:14.865398  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:14.920139  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:14.920170  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:14.973197  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:14.973224  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:15.042929  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:15.042968  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:15.069350  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:15.069377  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:15.167229  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:15.157061   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.158379   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.159455   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.160498   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:15.161615   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:17.667454  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:17.677695  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:17.677767  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:17.710656  299667 cri.go:89] found id: ""
	I1205 07:51:17.710678  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.710687  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:17.710693  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:17.710755  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:17.738643  299667 cri.go:89] found id: ""
	I1205 07:51:17.738665  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.738674  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:17.738680  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:17.738736  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:17.762784  299667 cri.go:89] found id: ""
	I1205 07:51:17.762806  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.762815  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:17.762821  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:17.762880  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:17.788678  299667 cri.go:89] found id: ""
	I1205 07:51:17.788699  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.788714  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:17.788720  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:17.788776  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:17.818009  299667 cri.go:89] found id: ""
	I1205 07:51:17.818031  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.818040  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:17.818046  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:17.818103  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:17.850251  299667 cri.go:89] found id: ""
	I1205 07:51:17.850272  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.850288  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:17.850295  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:17.850354  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:17.879482  299667 cri.go:89] found id: ""
	I1205 07:51:17.879503  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.879512  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:17.879518  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:17.879579  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:17.916240  299667 cri.go:89] found id: ""
	I1205 07:51:17.916261  299667 logs.go:282] 0 containers: []
	W1205 07:51:17.916270  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:17.916278  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:17.916344  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:17.945888  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:17.945915  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:18.004030  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:18.004079  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:18.022346  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:18.022422  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:18.096445  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:18.087987   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.088572   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090232   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.090775   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:18.092338   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:18.096468  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:18.096481  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.623691  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:20.635279  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:20.635409  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:20.670295  299667 cri.go:89] found id: ""
	I1205 07:51:20.670369  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.670390  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:20.670410  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:20.670493  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:20.701924  299667 cri.go:89] found id: ""
	I1205 07:51:20.701948  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.701957  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:20.701964  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:20.702055  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:20.727557  299667 cri.go:89] found id: ""
	I1205 07:51:20.727599  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.727622  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:20.727638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:20.727714  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:20.753615  299667 cri.go:89] found id: ""
	I1205 07:51:20.753640  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.753648  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:20.753655  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:20.753744  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:20.778426  299667 cri.go:89] found id: ""
	I1205 07:51:20.778450  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.778459  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:20.778466  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:20.778556  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:20.803580  299667 cri.go:89] found id: ""
	I1205 07:51:20.803605  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.803615  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:20.803638  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:20.803707  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:20.833142  299667 cri.go:89] found id: ""
	I1205 07:51:20.833193  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.833202  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:20.833208  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:20.833285  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:20.868368  299667 cri.go:89] found id: ""
	I1205 07:51:20.868443  299667 logs.go:282] 0 containers: []
	W1205 07:51:20.868465  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:20.868486  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:20.868523  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:20.895451  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:20.895524  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:20.926652  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:20.926677  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:20.981657  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:20.981692  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:20.995302  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:20.995329  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:21.064074  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:21.055838   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.056503   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.058334   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.059023   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:21.060931   12872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1205 07:51:23.564875  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:23.575583  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:23.575650  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:23.616208  299667 cri.go:89] found id: ""
	I1205 07:51:23.616234  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.616243  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:23.616251  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:23.616314  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:23.645044  299667 cri.go:89] found id: ""
	I1205 07:51:23.645068  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.645077  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:23.645083  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:23.645148  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:23.679840  299667 cri.go:89] found id: ""
	I1205 07:51:23.679861  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.679870  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:23.679876  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:23.679931  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:23.704932  299667 cri.go:89] found id: ""
	I1205 07:51:23.704954  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.704962  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:23.704980  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:23.705040  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:23.730380  299667 cri.go:89] found id: ""
	I1205 07:51:23.730403  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.730411  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:23.730418  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:23.730483  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:23.754200  299667 cri.go:89] found id: ""
	I1205 07:51:23.754224  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.754233  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:23.754240  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:23.754318  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:23.778888  299667 cri.go:89] found id: ""
	I1205 07:51:23.778913  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.778921  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:23.778927  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:23.778983  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:23.803021  299667 cri.go:89] found id: ""
	I1205 07:51:23.803045  299667 logs.go:282] 0 containers: []
	W1205 07:51:23.803054  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:23.803063  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:23.803074  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:23.859725  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:23.859805  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:23.878639  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:23.878714  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:23.953245  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:23.945764   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.946559   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948198   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.948513   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:23.950053   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:23.953267  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:23.953280  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:23.978428  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:23.978460  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:26.510161  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:26.520589  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:26.520663  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:26.545475  299667 cri.go:89] found id: ""
	I1205 07:51:26.545500  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.545508  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:26.545515  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:26.545570  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:26.570378  299667 cri.go:89] found id: ""
	I1205 07:51:26.570401  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.570409  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:26.570416  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:26.570476  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:26.596521  299667 cri.go:89] found id: ""
	I1205 07:51:26.596547  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.596556  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:26.596562  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:26.596618  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:26.624228  299667 cri.go:89] found id: ""
	I1205 07:51:26.624255  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.624264  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:26.624280  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:26.624336  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:26.650763  299667 cri.go:89] found id: ""
	I1205 07:51:26.650797  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.650807  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:26.650813  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:26.650870  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:26.681944  299667 cri.go:89] found id: ""
	I1205 07:51:26.681972  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.681980  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:26.681987  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:26.682043  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:26.706897  299667 cri.go:89] found id: ""
	I1205 07:51:26.706918  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.706927  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:26.706933  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:26.706991  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:26.732536  299667 cri.go:89] found id: ""
	I1205 07:51:26.732560  299667 logs.go:282] 0 containers: []
	W1205 07:51:26.732569  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:26.732578  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:26.732619  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:26.789640  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:26.789673  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:26.803060  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:26.803089  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:26.884697  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:26.872770   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.877391   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879063   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.879460   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:26.881003   13084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:26.884720  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:26.884737  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:26.912821  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:26.912856  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:29.445153  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:29.455673  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:29.455740  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:29.479669  299667 cri.go:89] found id: ""
	I1205 07:51:29.479694  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.479702  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:29.479709  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:29.479768  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:29.504129  299667 cri.go:89] found id: ""
	I1205 07:51:29.504151  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.504160  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:29.504166  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:29.504223  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:29.528037  299667 cri.go:89] found id: ""
	I1205 07:51:29.528061  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.528071  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:29.528077  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:29.528137  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:29.553104  299667 cri.go:89] found id: ""
	I1205 07:51:29.553129  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.553138  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:29.553145  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:29.553252  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:29.582155  299667 cri.go:89] found id: ""
	I1205 07:51:29.582180  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.582189  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:29.582195  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:29.582251  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:29.616156  299667 cri.go:89] found id: ""
	I1205 07:51:29.616181  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.616190  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:29.616205  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:29.616279  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:29.643373  299667 cri.go:89] found id: ""
	I1205 07:51:29.643399  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.643407  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:29.643413  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:29.643474  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:29.669624  299667 cri.go:89] found id: ""
	I1205 07:51:29.669649  299667 logs.go:282] 0 containers: []
	W1205 07:51:29.669658  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:29.669667  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:29.669678  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:29.725864  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:29.725897  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:29.739284  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:29.739311  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:29.812338  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:29.804736   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.805417   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807055   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.807553   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:29.809095   13197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:29.812358  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:29.812371  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:29.837776  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:29.837808  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:32.374773  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:32.385440  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:51:32.385519  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:51:32.410264  299667 cri.go:89] found id: ""
	I1205 07:51:32.410285  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.410294  299667 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:51:32.410301  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 07:51:32.410380  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:51:32.435693  299667 cri.go:89] found id: ""
	I1205 07:51:32.435716  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.435724  299667 logs.go:284] No container was found matching "etcd"
	I1205 07:51:32.435730  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 07:51:32.435789  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:51:32.459782  299667 cri.go:89] found id: ""
	I1205 07:51:32.459854  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.459865  299667 logs.go:284] No container was found matching "coredns"
	I1205 07:51:32.459872  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:51:32.460140  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:51:32.490196  299667 cri.go:89] found id: ""
	I1205 07:51:32.490221  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.490230  299667 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:51:32.490236  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:51:32.490302  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:51:32.515432  299667 cri.go:89] found id: ""
	I1205 07:51:32.515456  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.515465  299667 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:51:32.515472  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:51:32.515535  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:51:32.544631  299667 cri.go:89] found id: ""
	I1205 07:51:32.544657  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.544666  299667 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:51:32.544672  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 07:51:32.544733  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:51:32.568734  299667 cri.go:89] found id: ""
	I1205 07:51:32.568759  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.568768  299667 logs.go:284] No container was found matching "kindnet"
	I1205 07:51:32.568785  299667 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 07:51:32.568841  299667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 07:51:32.593347  299667 cri.go:89] found id: ""
	I1205 07:51:32.593375  299667 logs.go:282] 0 containers: []
	W1205 07:51:32.593385  299667 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 07:51:32.593394  299667 logs.go:123] Gathering logs for kubelet ...
	I1205 07:51:32.593406  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:51:32.663939  299667 logs.go:123] Gathering logs for dmesg ...
	I1205 07:51:32.663975  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:51:32.678486  299667 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:51:32.678514  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:51:32.740819  299667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 07:51:32.733560   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.734160   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.735620   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.736048   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:32.737671   13310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:51:32.740842  299667 logs.go:123] Gathering logs for containerd ...
	I1205 07:51:32.740854  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 07:51:32.765510  299667 logs.go:123] Gathering logs for container status ...
	I1205 07:51:32.765539  299667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:51:35.296522  299667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:51:35.310277  299667 out.go:203] 
	W1205 07:51:35.313261  299667 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1205 07:51:35.313316  299667 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1205 07:51:35.313333  299667 out.go:285] * Related issues:
	W1205 07:51:35.313353  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1205 07:51:35.313373  299667 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1205 07:51:35.316371  299667 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209287352Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209303147Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209319738Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209338060Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209354355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209371619Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209407246Z" level=info msg="runtime interface created"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209414106Z" level=info msg="created NRI interface"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209431698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209473470Z" level=info msg="Connect containerd service"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.209745990Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.210997942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227442652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227515662Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227533837Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.227584988Z" level=info msg="Start recovering state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248902324Z" level=info msg="Start event monitor"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248944278Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248954567Z" level=info msg="Start streaming server"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248967343Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248975425Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.248982071Z" level=info msg="starting plugins..."
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249010797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:31 newest-cni-622440 containerd[554]: time="2025-12-05T07:45:31.249144378Z" level=info msg="containerd successfully booted in 0.058238s"
	Dec 05 07:45:31 newest-cni-622440 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:51:49.178211   13989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:49.178786   13989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:49.180354   13989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:49.180786   13989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 07:51:49.182381   13989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:49 up  2:34,  0 user,  load average: 1.15, 0.85, 1.31
	Linux newest-cni-622440 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:46 newest-cni-622440 kubelet[13852]: E1205 07:51:46.928341   13852 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:46 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:47 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 05 07:51:47 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:47 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:47 newest-cni-622440 kubelet[13888]: E1205 07:51:47.668751   13888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:47 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:47 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:48 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 05 07:51:48 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:48 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:48 newest-cni-622440 kubelet[13894]: E1205 07:51:48.423595   13894 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:48 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:48 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:51:49 newest-cni-622440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 05 07:51:49 newest-cni-622440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:49 newest-cni-622440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:51:49 newest-cni-622440 kubelet[13982]: E1205 07:51:49.152462   13982 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:51:49 newest-cni-622440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:51:49 newest-cni-622440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-622440 -n newest-cni-622440: exit status 2 (344.037268ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-622440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (10.35s)
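
The pause failure bottoms out in two signals from the dump above: kubelet never stays up because it "is configured to not run on a host using cgroup v1", so no static pods (kube-apiserver included) are ever created, and minikube then exits with K8S_APISERVER_MISSING once its 6m0s apiserver wait expires. A quick way to confirm the host's cgroup mode from a shell on the node -- triage commands suggested here, not ones the test itself runs:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup
	# Docker 20.10+ reports which cgroup version it runs containers under
	docker info --format '{{.CgroupVersion}}'

If both report v1, the repeated kubelet restarts (counter at 9 by 07:51:49) and the missing apiserver are one fault, not two: the validation error blocks kubelet, and everything downstream of it.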

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (274.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:00:55.988493    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:00:58.370324    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:15.487043    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:15.494343    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:15.505710    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:15.527615    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:01:15.568981    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:15.650934    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:16.776854    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:18.058120    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:20.620418    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:25.742391    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:35.983779    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:01:56.465586    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:16.967642    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:17.909873    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1205 08:02:32.760277    4192 config.go:182] Loaded profile config "bridge-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:37.426871    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:51.040628    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.047001    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.058654    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.080087    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.121506    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.202955    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:51.364705    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:51.686584    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:02:52.328755    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:53.610638    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:02:56.172497    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:03:01.294890    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:03:01.797435    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:03:11.309054    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:03:11.536638    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:03:14.508697    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 17 more times)
E1205 08:03:32.018622    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 9 more times)
E1205 08:03:42.211848    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/auto-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 16 more times)
E1205 08:03:59.348216    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/calico-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 7 more times)
E1205 08:04:07.138411    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.144758    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.156153    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.177707    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.219187    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.300774    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:07.462281    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:04:07.784549    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 08:04:08.426725    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:04:09.709120    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:04:12.271124    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:04:12.980856    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/custom-flannel-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:04:14.019852    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 2 more times)
E1205 08:04:17.392541    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 10 more times)
E1205 08:04:27.634096    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 5 more times)
E1205 08:04:34.042658    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/kindnet-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 13 more times)
E1205 08:04:48.115736    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/enable-default-cni-183381/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(last message repeated 2 more times)
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 2 (318.668395ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-241270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-241270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.354µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-241270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
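The failure mode above is a polling loop timing out: the harness repeatedly lists pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and retries until a pod is Running or the 9m0s deadline lapses, and every attempt fails with connection refused because nothing is listening on 192.168.76.2:8443. A minimal sketch of such a poll follows — not the harness's actual code — assuming client-go, the default kubeconfig location, and a 5-second retry interval:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the CI run uses its own
	// KUBECONFIG path instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same overall deadline as the failed wait above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// The repeated "connection refused" warnings in this log come
			// from exactly this kind of list call while the apiserver is down.
			fmt.Println("WARNING: pod list returned:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(5 * time.Second): // hypothetical retry interval
		}
	}
}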
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-241270
helpers_test.go:243: (dbg) docker inspect no-preload-241270:

-- stdout --
	[
	    {
	        "Id": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	        "Created": "2025-12-05T07:34:52.488952391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:45:04.977832919Z",
	            "FinishedAt": "2025-12-05T07:45:03.670727358Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/hosts",
	        "LogPath": "/var/lib/docker/containers/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896/419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896-json.log",
	        "Name": "/no-preload-241270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e4a267ba54624a3dd3b30962bd90f972b4351d752e9195bd2935e7194d896",
	                "LowerDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/177b019daa10efd79c896f41e96546a77bca944a27a19fe62261f7f0bd6a46e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241270",
	                "Source": "/var/lib/docker/volumes/no-preload-241270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241270",
	                "name.minikube.sigs.k8s.io": "no-preload-241270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a57e08b617e6c99db8e0606f807966baa2265951deec9d7f31b28b674772ba7",
	            "SandboxKey": "/var/run/docker/netns/6a57e08b617e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:5e:e9:4a:59:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "509cbc0434c71e77097af60a2b0ce9a4473551172a41d0f484ec4e134db3ab73",
	                    "EndpointID": "8aadf1070cfccbd0175d1614c4a1ee7cb617e6ca8ef7cab3c7e2ce89af3cf831",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241270",
	                        "419e4a267ba5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
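The inspect output confirms the container itself is running, with 8443/tcp published on 127.0.0.1:33101 and a static address of 192.168.76.2 on the no-preload-241270 network, so the connection-refused errors point at the apiserver process inside the container rather than at Docker networking. A minimal sketch, assuming the docker CLI is on PATH, of pulling that mapped port out of the same inspect data:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to the container's 8443/tcp
	// (the apiserver entry in the Ports map shown above).
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"no-preload-241270").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver published on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}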
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 2 (320.247783ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241270 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-183381 sudo iptables -t nat -L -n -v                                 │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl status kubelet --all --full --no-pager         │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl cat kubelet --no-pager                         │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl status docker --all --full --no-pager          │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo systemctl cat docker --no-pager                          │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /etc/docker/daemon.json                              │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo docker system info                                       │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo systemctl cat cri-docker --no-pager                      │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cri-dockerd --version                                    │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl status containerd --all --full --no-pager      │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl cat containerd --no-pager                      │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /lib/systemd/system/containerd.service               │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo cat /etc/containerd/config.toml                          │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo containerd config dump                                   │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo systemctl status crio --all --full --no-pager            │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │                     │
	│ ssh     │ -p bridge-183381 sudo systemctl cat crio --no-pager                            │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ ssh     │ -p bridge-183381 sudo crio config                                              │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:02 UTC │
	│ delete  │ -p bridge-183381                                                               │ bridge-183381 │ jenkins │ v1.37.0 │ 05 Dec 25 08:02 UTC │ 05 Dec 25 08:03 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 08:01:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
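Each entry below follows the klog format spelled out in this header. As a minimal, illustrative sketch (field names here are ours, not minikube's), a Go regexp that splits such a line into its parts:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the header's format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := "I1205 08:01:18.453600  360803 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s tid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
```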
	I1205 08:01:18.453600  360803 out.go:360] Setting OutFile to fd 1 ...
	I1205 08:01:18.453730  360803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:01:18.453740  360803 out.go:374] Setting ErrFile to fd 2...
	I1205 08:01:18.453746  360803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 08:01:18.454014  360803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 08:01:18.454419  360803 out.go:368] Setting JSON to false
	I1205 08:01:18.455299  360803 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9825,"bootTime":1764911853,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 08:01:18.456094  360803 start.go:143] virtualization:  
	I1205 08:01:18.460439  360803 out.go:179] * [bridge-183381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 08:01:18.465027  360803 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 08:01:18.465101  360803 notify.go:221] Checking for updates...
	I1205 08:01:18.471813  360803 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 08:01:18.475037  360803 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 08:01:18.478186  360803 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 08:01:18.481342  360803 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 08:01:18.484306  360803 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 08:01:18.487877  360803 config.go:182] Loaded profile config "no-preload-241270": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 08:01:18.487976  360803 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 08:01:18.523691  360803 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 08:01:18.523811  360803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:01:18.580079  360803 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 08:01:18.570321788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
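The dump above comes from feeding `docker system info --format "{{json .}}"` through a JSON decode. A pared-down sketch of that decode, assuming docker is on PATH; the struct keeps only a few of the fields visible above and is not minikube's actual type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// A handful of the top-level fields visible in the dump; docker emits many more.
type dockerInfo struct {
	NCPU            int
	MemTotal        int64
	ServerVersion   string
	CgroupDriver    string
	OperatingSystem string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%+v\n", info)
}
```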
	I1205 08:01:18.580184  360803 docker.go:319] overlay module found
	I1205 08:01:18.583542  360803 out.go:179] * Using the docker driver based on user configuration
	I1205 08:01:18.586521  360803 start.go:309] selected driver: docker
	I1205 08:01:18.586541  360803 start.go:927] validating driver "docker" against <nil>
	I1205 08:01:18.586556  360803 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 08:01:18.587302  360803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 08:01:18.651358  360803 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 08:01:18.641126778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 08:01:18.651611  360803 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 08:01:18.651847  360803 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:01:18.655115  360803 out.go:179] * Using Docker driver with root privileges
	I1205 08:01:18.658120  360803 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:01:18.658144  360803 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 08:01:18.658236  360803 start.go:353] cluster config:
	{Name:bridge-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:01:18.661568  360803 out.go:179] * Starting "bridge-183381" primary control-plane node in "bridge-183381" cluster
	I1205 08:01:18.664536  360803 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 08:01:18.667488  360803 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 08:01:18.670385  360803 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 08:01:18.670440  360803 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1205 08:01:18.670468  360803 cache.go:65] Caching tarball of preloaded images
	I1205 08:01:18.670565  360803 preload.go:238] Found /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 08:01:18.670579  360803 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1205 08:01:18.670681  360803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/config.json ...
	I1205 08:01:18.670706  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/config.json: {Name:mk2f795ecdf3b9669a0e57e27def34577bc2a2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:18.670873  360803 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 08:01:18.690871  360803 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 08:01:18.690893  360803 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 08:01:18.690918  360803 cache.go:243] Successfully downloaded all kic artifacts
	I1205 08:01:18.690962  360803 start.go:360] acquireMachinesLock for bridge-183381: {Name:mkda1fe1a52c39dbd9e52277ace78327c38268e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 08:01:18.691068  360803 start.go:364] duration metric: took 86.007µs to acquireMachinesLock for "bridge-183381"
	I1205 08:01:18.691098  360803 start.go:93] Provisioning new machine with config: &{Name:bridge-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 08:01:18.691190  360803 start.go:125] createHost starting for "" (driver="docker")
	I1205 08:01:18.694660  360803 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 08:01:18.694890  360803 start.go:159] libmachine.API.Create for "bridge-183381" (driver="docker")
	I1205 08:01:18.694945  360803 client.go:173] LocalClient.Create starting
	I1205 08:01:18.695059  360803 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
	I1205 08:01:18.695101  360803 main.go:143] libmachine: Decoding PEM data...
	I1205 08:01:18.695121  360803 main.go:143] libmachine: Parsing certificate...
	I1205 08:01:18.695175  360803 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
	I1205 08:01:18.695197  360803 main.go:143] libmachine: Decoding PEM data...
	I1205 08:01:18.695212  360803 main.go:143] libmachine: Parsing certificate...
	I1205 08:01:18.695577  360803 cli_runner.go:164] Run: docker network inspect bridge-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 08:01:18.711812  360803 cli_runner.go:211] docker network inspect bridge-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 08:01:18.711906  360803 network_create.go:284] running [docker network inspect bridge-183381] to gather additional debugging logs...
	I1205 08:01:18.711928  360803 cli_runner.go:164] Run: docker network inspect bridge-183381
	W1205 08:01:18.728158  360803 cli_runner.go:211] docker network inspect bridge-183381 returned with exit code 1
	I1205 08:01:18.728189  360803 network_create.go:287] error running [docker network inspect bridge-183381]: docker network inspect bridge-183381: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-183381 not found
	I1205 08:01:18.728202  360803 network_create.go:289] output of [docker network inspect bridge-183381]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-183381 not found
	
	** /stderr **
	I1205 08:01:18.728302  360803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 08:01:18.745007  360803 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
	I1205 08:01:18.745405  360803 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1cd1afdbbadd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:59:ef:51:06:b9} reservation:<nil>}
	I1205 08:01:18.745807  360803 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6f9d4dd0f896 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:a5:7b:90:e8:f8} reservation:<nil>}
	I1205 08:01:18.746108  360803 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-509cbc0434c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:5b:c8:fd:a0:2d} reservation:<nil>}
	I1205 08:01:18.746571  360803 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1cb50}
	I1205 08:01:18.746596  360803 network_create.go:124] attempt to create docker network bridge-183381 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 08:01:18.746658  360803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-183381 bridge-183381
	I1205 08:01:18.808155  360803 network_create.go:108] docker network bridge-183381 192.168.85.0/24 created
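The four "skipping subnet" lines and the final pick of 192.168.85.0/24 trace a scan over candidate private /24s, stepping the third octet by 9 from 192.168.49.0 (the step is inferred from the candidates shown, not from minikube's source). A rough Go sketch of that scan; `taken` is a stand-in for the real interface/route probe and simply hard-codes the subnets reported busy above:

```go
package main

import (
	"fmt"
	"net/netip"
)

// taken stands in for probing host interfaces; it hard-codes the
// subnets the log above reports as occupied.
func taken(p netip.Prefix) bool {
	used := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	for _, u := range used {
		if p == netip.MustParsePrefix(u) {
			return true
		}
	}
	return false
}

func main() {
	// Start at 192.168.49.0/24 and advance the third octet by 9,
	// matching the skipped candidates in the log.
	for octet := 49; octet <= 254; octet += 9 {
		p := netip.MustParsePrefix(fmt.Sprintf("192.168.%d.0/24", octet))
		if !taken(p) {
			fmt.Println("using free private subnet", p) // 192.168.85.0/24
			return
		}
	}
}
```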
	I1205 08:01:18.808191  360803 kic.go:121] calculated static IP "192.168.85.2" for the "bridge-183381" container
	I1205 08:01:18.808266  360803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 08:01:18.825532  360803 cli_runner.go:164] Run: docker volume create bridge-183381 --label name.minikube.sigs.k8s.io=bridge-183381 --label created_by.minikube.sigs.k8s.io=true
	I1205 08:01:18.844447  360803 oci.go:103] Successfully created a docker volume bridge-183381
	I1205 08:01:18.844548  360803 cli_runner.go:164] Run: docker run --rm --name bridge-183381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-183381 --entrypoint /usr/bin/test -v bridge-183381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 08:01:19.372522  360803 oci.go:107] Successfully prepared a docker volume bridge-183381
	I1205 08:01:19.372589  360803 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 08:01:19.372599  360803 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 08:01:19.372680  360803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-183381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 08:01:23.310649  360803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-183381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.937929705s)
	I1205 08:01:23.310682  360803 kic.go:203] duration metric: took 3.938078925s to extract preloaded images to volume ...
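The "duration metric" lines pair a completed command with its wall-clock time, as cli_runner.go does for the docker run/tar extraction above. A trivial version of that timing pattern (the command here is a short stand-in, not the real extraction):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("tar", "--version") // stand-in for the long docker run above
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	// Mirrors the log's "duration metric: took ..." phrasing.
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}
```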
	W1205 08:01:23.310823  360803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 08:01:23.310947  360803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 08:01:23.365126  360803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-183381 --name bridge-183381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-183381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-183381 --network bridge-183381 --ip 192.168.85.2 --volume bridge-183381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 08:01:23.674782  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Running}}
	I1205 08:01:23.693783  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:23.724114  360803 cli_runner.go:164] Run: docker exec bridge-183381 stat /var/lib/dpkg/alternatives/iptables
	I1205 08:01:23.772388  360803 oci.go:144] the created container "bridge-183381" has a running status.
	I1205 08:01:23.772416  360803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa...
	I1205 08:01:24.121044  360803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 08:01:24.150172  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:24.189916  360803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 08:01:24.189940  360803 kic_runner.go:114] Args: [docker exec --privileged bridge-183381 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 08:01:24.265805  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:24.296553  360803 machine.go:94] provisionDockerMachine start ...
	I1205 08:01:24.296648  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:24.318537  360803 main.go:143] libmachine: Using SSH client type: native
	I1205 08:01:24.318871  360803 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 08:01:24.318881  360803 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 08:01:24.319635  360803 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 08:01:27.476949  360803 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-183381
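The `docker container inspect -f` format string used above digs the published host port for 22/tcp out of the inspect document. The same template evaluated in Go against a pared-down stand-in for docker's inspect structure (only the fields the template touches are modeled):

```go
package main

import (
	"os"
	"text/template"
)

// Minimal shape of `docker container inspect` output needed by the template.
type portBinding struct{ HostIP, HostPort string }
type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "33138"}},
	}
	// The same template the log shows minikube passing to docker.
	t := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = t.Execute(os.Stdout, c) // prints: 33138
}
```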
	
	I1205 08:01:27.476973  360803 ubuntu.go:182] provisioning hostname "bridge-183381"
	I1205 08:01:27.477058  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:27.495649  360803 main.go:143] libmachine: Using SSH client type: native
	I1205 08:01:27.495981  360803 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 08:01:27.495998  360803 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-183381 && echo "bridge-183381" | sudo tee /etc/hostname
	I1205 08:01:27.654866  360803 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-183381
	
	I1205 08:01:27.655039  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:27.673476  360803 main.go:143] libmachine: Using SSH client type: native
	I1205 08:01:27.673800  360803 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1205 08:01:27.673824  360803 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-183381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-183381/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-183381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 08:01:27.825463  360803 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 08:01:27.825487  360803 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
	I1205 08:01:27.825512  360803 ubuntu.go:190] setting up certificates
	I1205 08:01:27.825521  360803 provision.go:84] configureAuth start
	I1205 08:01:27.825581  360803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-183381
	I1205 08:01:27.843644  360803 provision.go:143] copyHostCerts
	I1205 08:01:27.843713  360803 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
	I1205 08:01:27.843727  360803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
	I1205 08:01:27.843804  360803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
	I1205 08:01:27.843902  360803 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
	I1205 08:01:27.843912  360803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
	I1205 08:01:27.843954  360803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
	I1205 08:01:27.844014  360803 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
	I1205 08:01:27.844027  360803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
	I1205 08:01:27.844059  360803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
	I1205 08:01:27.844142  360803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.bridge-183381 san=[127.0.0.1 192.168.85.2 bridge-183381 localhost minikube]
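provision.go:117 issues a server certificate whose SANs cover the container's addresses and names. A self-contained sketch of issuing such a cert with crypto/x509; it self-signs for brevity where minikube signs with its CA key, and borrows the SAN list and 26280h lifetime from the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-183381"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// SAN list from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"bridge-183381", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; minikube signs with the shared CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```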
	I1205 08:01:27.992089  360803 provision.go:177] copyRemoteCerts
	I1205 08:01:27.992163  360803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 08:01:27.992204  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:28.018283  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:28.127060  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 08:01:28.150084  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 08:01:28.171052  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 08:01:28.188755  360803 provision.go:87] duration metric: took 363.209951ms to configureAuth
	I1205 08:01:28.188779  360803 ubuntu.go:206] setting minikube options for container-runtime
	I1205 08:01:28.188970  360803 config.go:182] Loaded profile config "bridge-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 08:01:28.188977  360803 machine.go:97] duration metric: took 3.892408541s to provisionDockerMachine
	I1205 08:01:28.188984  360803 client.go:176] duration metric: took 9.494027711s to LocalClient.Create
	I1205 08:01:28.189008  360803 start.go:167] duration metric: took 9.494119421s to libmachine.API.Create "bridge-183381"
	I1205 08:01:28.189017  360803 start.go:293] postStartSetup for "bridge-183381" (driver="docker")
	I1205 08:01:28.189027  360803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 08:01:28.189078  360803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 08:01:28.189116  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:28.206732  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:28.309250  360803 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 08:01:28.312424  360803 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 08:01:28.312452  360803 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 08:01:28.312463  360803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
	I1205 08:01:28.312516  360803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
	I1205 08:01:28.312619  360803 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
	I1205 08:01:28.312725  360803 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 08:01:28.320139  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
	I1205 08:01:28.337806  360803 start.go:296] duration metric: took 148.774294ms for postStartSetup
	I1205 08:01:28.338176  360803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-183381
	I1205 08:01:28.355058  360803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/config.json ...
	I1205 08:01:28.355337  360803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 08:01:28.355385  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:28.373411  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:28.478212  360803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 08:01:28.482931  360803 start.go:128] duration metric: took 9.791726816s to createHost
	I1205 08:01:28.482956  360803 start.go:83] releasing machines lock for "bridge-183381", held for 9.791874517s
	I1205 08:01:28.483025  360803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-183381
	I1205 08:01:28.500132  360803 ssh_runner.go:195] Run: cat /version.json
	I1205 08:01:28.500184  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:28.500265  360803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 08:01:28.500327  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:28.521483  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:28.534827  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:28.624837  360803 ssh_runner.go:195] Run: systemctl --version
	I1205 08:01:28.713102  360803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 08:01:28.717369  360803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 08:01:28.717440  360803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 08:01:28.744193  360803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 08:01:28.744220  360803 start.go:496] detecting cgroup driver to use...
	I1205 08:01:28.744254  360803 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 08:01:28.744314  360803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 08:01:28.760067  360803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 08:01:28.772962  360803 docker.go:218] disabling cri-docker service (if available) ...
	I1205 08:01:28.773022  360803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 08:01:28.790377  360803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 08:01:28.808564  360803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 08:01:28.957901  360803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 08:01:29.084580  360803 docker.go:234] disabling docker service ...
	I1205 08:01:29.084695  360803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 08:01:29.107186  360803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 08:01:29.120277  360803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 08:01:29.243580  360803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 08:01:29.371250  360803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 08:01:29.383655  360803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 08:01:29.396888  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1205 08:01:29.406092  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 08:01:29.415401  360803 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 08:01:29.415511  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 08:01:29.424829  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:01:29.434166  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 08:01:29.442862  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 08:01:29.451645  360803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 08:01:29.459305  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 08:01:29.468470  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 08:01:29.477608  360803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 08:01:29.486318  360803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 08:01:29.494027  360803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 08:01:29.501946  360803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:01:29.612225  360803 ssh_runner.go:195] Run: sudo systemctl restart containerd
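The run of sed commands above rewrites /etc/containerd/config.toml in place before the restart; the SystemdCgroup edit, for example, preserves indentation while forcing the value to false. The same substitution expressed in Go (the input is a stand-in config fragment, not the full file):

```go
package main

import (
	"fmt"
	"regexp"
)

// Mirrors the sed at 08:01:29.415511: force SystemdCgroup = false
// while keeping the line's original indentation via the capture group.
var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true`
	fmt.Println(systemdCgroup.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```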
	I1205 08:01:29.734622  360803 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 08:01:29.734697  360803 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 08:01:29.738401  360803 start.go:564] Will wait 60s for crictl version
	I1205 08:01:29.738465  360803 ssh_runner.go:195] Run: which crictl
	I1205 08:01:29.741850  360803 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 08:01:29.767083  360803 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1205 08:01:29.767154  360803 ssh_runner.go:195] Run: containerd --version
	I1205 08:01:29.791517  360803 ssh_runner.go:195] Run: containerd --version
	I1205 08:01:29.814994  360803 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	I1205 08:01:29.818076  360803 cli_runner.go:164] Run: docker network inspect bridge-183381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 08:01:29.834073  360803 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 08:01:29.838004  360803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
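The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any line already ending in the tab-separated name, then append the fresh mapping. A pure-string Go equivalent (no file I/O or sudo, unlike the real command):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any stale mapping for name, then appends ip<TAB>name,
// matching the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(l, "\t"+name) {
			out = append(out, l)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.85.1", "host.minikube.internal"))
}
```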
	I1205 08:01:29.847538  360803 kubeadm.go:884] updating cluster {Name:bridge-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 08:01:29.847656  360803 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 08:01:29.847747  360803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 08:01:29.874759  360803 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 08:01:29.874784  360803 containerd.go:534] Images already preloaded, skipping extraction
	I1205 08:01:29.874843  360803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 08:01:29.899364  360803 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 08:01:29.899386  360803 cache_images.go:86] Images are preloaded, skipping loading
	I1205 08:01:29.899394  360803 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1205 08:01:29.899484  360803 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-183381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1205 08:01:29.899561  360803 ssh_runner.go:195] Run: sudo crictl info
	I1205 08:01:29.931606  360803 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:01:29.931645  360803 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 08:01:29.931668  360803 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-183381 NodeName:bridge-183381 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 08:01:29.931795  360803 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-183381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 08:01:29.931870  360803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 08:01:29.939615  360803 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 08:01:29.939692  360803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 08:01:29.947167  360803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 08:01:29.959782  360803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 08:01:29.972305  360803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1205 08:01:29.985252  360803 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 08:01:29.988908  360803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 08:01:29.998699  360803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:01:30.148943  360803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:01:30.167714  360803 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381 for IP: 192.168.85.2
	I1205 08:01:30.167734  360803 certs.go:195] generating shared ca certs ...
	I1205 08:01:30.167750  360803 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:30.167922  360803 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
	I1205 08:01:30.167990  360803 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
	I1205 08:01:30.168011  360803 certs.go:257] generating profile certs ...
	I1205 08:01:30.168070  360803 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.key
	I1205 08:01:30.168095  360803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.crt with IP's: []
	I1205 08:01:30.329818  360803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.crt ...
	I1205 08:01:30.329855  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.crt: {Name:mk616265c59c71019abc5abe503c2775b34a148f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:30.330068  360803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.key ...
	I1205 08:01:30.330081  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/client.key: {Name:mk8f5445d93d87057e49271471544d57c6939c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:30.330179  360803 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key.df49a299
	I1205 08:01:30.330195  360803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt.df49a299 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 08:01:30.611821  360803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt.df49a299 ...
	I1205 08:01:30.611852  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt.df49a299: {Name:mk39811c6cab743589885b627eac899ec16d0491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:30.612041  360803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key.df49a299 ...
	I1205 08:01:30.612055  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key.df49a299: {Name:mkf52983e04c7b7d4a9ac4b02e7065f375cbfbd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:30.612144  360803 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt.df49a299 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt
	I1205 08:01:30.612229  360803 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key.df49a299 -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key
	I1205 08:01:30.612288  360803 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.key
	I1205 08:01:30.612307  360803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.crt with IP's: []
	I1205 08:01:31.108716  360803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.crt ...
	I1205 08:01:31.108749  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.crt: {Name:mk5672bef4b627281062bb6364a7c63ff4a0fa1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:31.108922  360803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.key ...
	I1205 08:01:31.108937  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.key: {Name:mkda9b00c4d39cb9015eedb66ede3e1ac5ed79be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
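
The three "generating signed profile cert" sequences above (client, apiserver, aggregator proxy-client) run in Go inside minikube's certs package. A rough openssl equivalent of the apiserver step, as a sketch only — file names are illustrative, and the IP SAN list is copied from the log above:

    # bash; signs an apiserver cert against the profile CA with the logged SANs
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')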
	I1205 08:01:31.109120  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
	W1205 08:01:31.109190  360803 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
	I1205 08:01:31.109205  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 08:01:31.109239  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
	I1205 08:01:31.109268  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
	I1205 08:01:31.109296  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
	I1205 08:01:31.109345  360803 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
	I1205 08:01:31.110752  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 08:01:31.145371  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 08:01:31.180127  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 08:01:31.213980  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 08:01:31.232612  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 08:01:31.250478  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 08:01:31.268163  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 08:01:31.285856  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/bridge-183381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 08:01:31.303489  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
	I1205 08:01:31.321276  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
	I1205 08:01:31.338807  360803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 08:01:31.355314  360803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 08:01:31.368720  360803 ssh_runner.go:195] Run: openssl version
	I1205 08:01:31.375081  360803 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:01:31.382881  360803 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 08:01:31.391762  360803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:01:31.396252  360803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:06 /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:01:31.396348  360803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 08:01:31.437690  360803 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 08:01:31.445742  360803 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 08:01:31.453467  360803 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
	I1205 08:01:31.461324  360803 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
	I1205 08:01:31.469932  360803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
	I1205 08:01:31.473630  360803 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:15 /usr/share/ca-certificates/4192.pem
	I1205 08:01:31.473706  360803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
	I1205 08:01:31.514680  360803 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 08:01:31.522191  360803 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
	I1205 08:01:31.529551  360803 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
	I1205 08:01:31.536925  360803 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
	I1205 08:01:31.544506  360803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
	I1205 08:01:31.548389  360803 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:15 /usr/share/ca-certificates/41922.pem
	I1205 08:01:31.548453  360803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
	I1205 08:01:31.589129  360803 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 08:01:31.597783  360803 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
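
The test/ln/ls/openssl cycle above repeats one idiom per CA bundle: publish the PEM under /usr/share/ca-certificates, then link it into OpenSSL's hashed lookup directory. Condensed for the 41922.pem case (the 3ec20f2e hash comes straight from the log):

    sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem)  # -> 3ec20f2e
    sudo ln -fs /etc/ssl/certs/41922.pem "/etc/ssl/certs/${hash}.0"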
	I1205 08:01:31.605712  360803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 08:01:31.609545  360803 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 08:01:31.609601  360803 kubeadm.go:401] StartCluster: {Name:bridge-183381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-183381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 08:01:31.609684  360803 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 08:01:31.609752  360803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 08:01:31.635760  360803 cri.go:89] found id: ""
	I1205 08:01:31.635876  360803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 08:01:31.643789  360803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 08:01:31.651571  360803 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 08:01:31.651667  360803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 08:01:31.659372  360803 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 08:01:31.659395  360803 kubeadm.go:158] found existing configuration files:
	
	I1205 08:01:31.659467  360803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 08:01:31.667422  360803 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 08:01:31.667520  360803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 08:01:31.674821  360803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 08:01:31.682266  360803 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 08:01:31.682340  360803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 08:01:31.690216  360803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 08:01:31.698048  360803 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 08:01:31.698116  360803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 08:01:31.705873  360803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 08:01:31.713850  360803 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 08:01:31.713939  360803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
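
The grep/rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init. The logic reduces to roughly:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Here every grep exits with status 2 because the files do not exist yet, so on a first start the rm calls are no-ops.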
	I1205 08:01:31.721510  360803 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 08:01:31.790174  360803 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 08:01:31.790412  360803 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 08:01:31.873565  360803 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 08:01:49.869136  360803 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 08:01:49.869234  360803 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 08:01:49.869328  360803 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 08:01:49.869387  360803 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 08:01:49.869425  360803 kubeadm.go:319] OS: Linux
	I1205 08:01:49.869473  360803 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 08:01:49.869524  360803 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 08:01:49.869574  360803 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 08:01:49.869625  360803 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 08:01:49.869680  360803 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 08:01:49.869732  360803 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 08:01:49.869781  360803 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 08:01:49.869833  360803 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 08:01:49.869883  360803 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 08:01:49.869957  360803 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 08:01:49.870062  360803 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 08:01:49.870157  360803 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 08:01:49.870222  360803 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 08:01:49.873229  360803 out.go:252]   - Generating certificates and keys ...
	I1205 08:01:49.873332  360803 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 08:01:49.873402  360803 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 08:01:49.873481  360803 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 08:01:49.873542  360803 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 08:01:49.873607  360803 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 08:01:49.873660  360803 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 08:01:49.873717  360803 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 08:01:49.873839  360803 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-183381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:01:49.873895  360803 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 08:01:49.874015  360803 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-183381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 08:01:49.874083  360803 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 08:01:49.874149  360803 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 08:01:49.874196  360803 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 08:01:49.874256  360803 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 08:01:49.874309  360803 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 08:01:49.874369  360803 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 08:01:49.874427  360803 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 08:01:49.874494  360803 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 08:01:49.874551  360803 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 08:01:49.874636  360803 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 08:01:49.874706  360803 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 08:01:49.879501  360803 out.go:252]   - Booting up control plane ...
	I1205 08:01:49.879628  360803 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 08:01:49.879733  360803 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 08:01:49.879814  360803 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 08:01:49.879934  360803 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 08:01:49.880046  360803 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 08:01:49.880162  360803 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 08:01:49.880259  360803 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 08:01:49.880304  360803 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 08:01:49.880447  360803 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 08:01:49.880564  360803 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 08:01:49.880631  360803 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50187794s
	I1205 08:01:49.880733  360803 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 08:01:49.880832  360803 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 08:01:49.880931  360803 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 08:01:49.881018  360803 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 08:01:49.881099  360803 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.7433166s
	I1205 08:01:49.881185  360803 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.977522983s
	I1205 08:01:49.881261  360803 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001703475s
	I1205 08:01:49.881379  360803 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 08:01:49.881520  360803 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 08:01:49.881585  360803 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 08:01:49.881790  360803 kubeadm.go:319] [mark-control-plane] Marking the node bridge-183381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 08:01:49.881854  360803 kubeadm.go:319] [bootstrap-token] Using token: wjsb5o.h5fyvt5464uviyjr
	I1205 08:01:49.884642  360803 out.go:252]   - Configuring RBAC rules ...
	I1205 08:01:49.884765  360803 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 08:01:49.884862  360803 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 08:01:49.885020  360803 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 08:01:49.885283  360803 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 08:01:49.885416  360803 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 08:01:49.885512  360803 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 08:01:49.885640  360803 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 08:01:49.885693  360803 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 08:01:49.885745  360803 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 08:01:49.885753  360803 kubeadm.go:319] 
	I1205 08:01:49.885817  360803 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 08:01:49.885829  360803 kubeadm.go:319] 
	I1205 08:01:49.885912  360803 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 08:01:49.885919  360803 kubeadm.go:319] 
	I1205 08:01:49.885946  360803 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 08:01:49.886013  360803 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 08:01:49.886070  360803 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 08:01:49.886077  360803 kubeadm.go:319] 
	I1205 08:01:49.886136  360803 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 08:01:49.886143  360803 kubeadm.go:319] 
	I1205 08:01:49.886201  360803 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 08:01:49.886209  360803 kubeadm.go:319] 
	I1205 08:01:49.886265  360803 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 08:01:49.886349  360803 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 08:01:49.886431  360803 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 08:01:49.886438  360803 kubeadm.go:319] 
	I1205 08:01:49.886530  360803 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 08:01:49.886616  360803 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 08:01:49.886623  360803 kubeadm.go:319] 
	I1205 08:01:49.886714  360803 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wjsb5o.h5fyvt5464uviyjr \
	I1205 08:01:49.886829  360803 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c \
	I1205 08:01:49.886852  360803 kubeadm.go:319] 	--control-plane 
	I1205 08:01:49.886856  360803 kubeadm.go:319] 
	I1205 08:01:49.886956  360803 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 08:01:49.886964  360803 kubeadm.go:319] 
	I1205 08:01:49.887053  360803 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wjsb5o.h5fyvt5464uviyjr \
	I1205 08:01:49.887181  360803 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7da661c66adcdc7adc5fd75c1776d7f8fbeafbd1c6f82c89d86db02e1912959c 
	I1205 08:01:49.887192  360803 cni.go:84] Creating CNI manager for "bridge"
	I1205 08:01:49.890280  360803 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 08:01:49.893203  360803 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 08:01:49.903082  360803 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
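
The 496-byte conflist itself is not echoed to the log. A typical bridge conflist of the kind minikube writes is sketched below; treat the subnet and plugin flags as assumptions, not the exact bytes from this run:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF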
	I1205 08:01:49.919531  360803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 08:01:49.919644  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:49.919707  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-183381 minikube.k8s.io/updated_at=2025_12_05T08_01_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=bridge-183381 minikube.k8s.io/primary=true
	I1205 08:01:50.086506  360803 ops.go:34] apiserver oom_adj: -16
	I1205 08:01:50.086537  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:50.587155  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:51.087584  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:51.586652  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:52.086860  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:52.586667  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:53.087179  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:53.587588  360803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 08:01:53.704054  360803 kubeadm.go:1114] duration metric: took 3.784453171s to wait for elevateKubeSystemPrivileges
	I1205 08:01:53.704085  360803 kubeadm.go:403] duration metric: took 22.094489118s to StartCluster
	I1205 08:01:53.704102  360803 settings.go:142] acquiring lock: {Name:mk06b9bee9381067d6ab070738894c9d4c365f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:53.704163  360803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 08:01:53.705152  360803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/kubeconfig: {Name:mk8de0d93059d9209a9a5b34e6cc538d8ac0d743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 08:01:53.705425  360803 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 08:01:53.705501  360803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 08:01:53.705755  360803 config.go:182] Loaded profile config "bridge-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 08:01:53.705793  360803 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 08:01:53.705851  360803 addons.go:70] Setting storage-provisioner=true in profile "bridge-183381"
	I1205 08:01:53.705864  360803 addons.go:239] Setting addon storage-provisioner=true in "bridge-183381"
	I1205 08:01:53.705886  360803 host.go:66] Checking if "bridge-183381" exists ...
	I1205 08:01:53.705988  360803 addons.go:70] Setting default-storageclass=true in profile "bridge-183381"
	I1205 08:01:53.706004  360803 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-183381"
	I1205 08:01:53.706282  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:53.706449  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:53.709526  360803 out.go:179] * Verifying Kubernetes components...
	I1205 08:01:53.713092  360803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 08:01:53.734672  360803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 08:01:53.737838  360803 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:01:53.737860  360803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 08:01:53.737921  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:53.752313  360803 addons.go:239] Setting addon default-storageclass=true in "bridge-183381"
	I1205 08:01:53.752357  360803 host.go:66] Checking if "bridge-183381" exists ...
	I1205 08:01:53.752843  360803 cli_runner.go:164] Run: docker container inspect bridge-183381 --format={{.State.Status}}
	I1205 08:01:53.763538  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:53.795289  360803 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 08:01:53.795310  360803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 08:01:53.795377  360803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-183381
	I1205 08:01:53.820384  360803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/bridge-183381/id_rsa Username:docker}
	I1205 08:01:53.989330  360803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 08:01:54.019279  360803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 08:01:54.019452  360803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 08:01:54.081729  360803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 08:01:55.056185  360803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066817162s)
	I1205 08:01:55.056296  360803 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.036825742s)
	I1205 08:01:55.057129  360803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.03782005s)
	I1205 08:01:55.057272  360803 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
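
The sed pipeline two entries up splices a hosts stanza (plus a log directive) into the CoreDNS Corefile before replacing the ConfigMap. The injected block, reconstructed from that sed expression, is:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

It can be confirmed after the fact with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'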
	I1205 08:01:55.059712  360803 node_ready.go:35] waiting up to 15m0s for node "bridge-183381" to be "Ready" ...
	I1205 08:01:55.096250  360803 node_ready.go:49] node "bridge-183381" is "Ready"
	I1205 08:01:55.096285  360803 node_ready.go:38] duration metric: took 36.51298ms for node "bridge-183381" to be "Ready" ...
	I1205 08:01:55.096302  360803 api_server.go:52] waiting for apiserver process to appear ...
	I1205 08:01:55.096357  360803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 08:01:55.139817  360803 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 08:01:55.142904  360803 addons.go:530] duration metric: took 1.437106003s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 08:01:55.233767  360803 api_server.go:72] duration metric: took 1.528316115s to wait for apiserver process to appear ...
	I1205 08:01:55.233793  360803 api_server.go:88] waiting for apiserver healthz status ...
	I1205 08:01:55.233811  360803 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 08:01:55.262999  360803 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 08:01:55.265716  360803 api_server.go:141] control plane version: v1.34.2
	I1205 08:01:55.265755  360803 api_server.go:131] duration metric: took 31.955682ms to wait for apiserver health ...
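
The healthz probe above can be reproduced by hand; -k is needed because the cluster CA is not in the host trust store (endpoint taken from the log):

    curl -k https://192.168.85.2:8443/healthz
    # -> ok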
	I1205 08:01:55.265765  360803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 08:01:55.280471  360803 system_pods.go:59] 8 kube-system pods found
	I1205 08:01:55.280517  360803 system_pods.go:61] "coredns-66bc5c9577-q884l" [74f41c26-9198-4f8a-ae22-52179118ca67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.280525  360803 system_pods.go:61] "coredns-66bc5c9577-s6ks4" [c08b86c9-b018-4ce4-b6d4-67098ccdac5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.280531  360803 system_pods.go:61] "etcd-bridge-183381" [f1b85224-1cb3-4122-8238-978f649c71cb] Running
	I1205 08:01:55.280538  360803 system_pods.go:61] "kube-apiserver-bridge-183381" [7d25209e-11c7-4535-aa84-2a85d8fbccba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:01:55.280544  360803 system_pods.go:61] "kube-controller-manager-bridge-183381" [b2406d16-bfb3-4942-84e6-16c8891ac2a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:01:55.280552  360803 system_pods.go:61] "kube-proxy-rkg4f" [b52c5121-0a6f-47be-8ad6-7bcf194fa918] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:01:55.280558  360803 system_pods.go:61] "kube-scheduler-bridge-183381" [366fbc0b-c997-4173-aab8-30debe96101e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 08:01:55.280575  360803 system_pods.go:61] "storage-provisioner" [b48d1889-8658-4787-9324-d52f49793eeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:55.280582  360803 system_pods.go:74] duration metric: took 14.81149ms to wait for pod list to return data ...
	I1205 08:01:55.280594  360803 default_sa.go:34] waiting for default service account to be created ...
	I1205 08:01:55.290048  360803 default_sa.go:45] found service account: "default"
	I1205 08:01:55.290086  360803 default_sa.go:55] duration metric: took 9.4788ms for default service account to be created ...
	I1205 08:01:55.290098  360803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 08:01:55.380089  360803 system_pods.go:86] 8 kube-system pods found
	I1205 08:01:55.380121  360803 system_pods.go:89] "coredns-66bc5c9577-q884l" [74f41c26-9198-4f8a-ae22-52179118ca67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.380133  360803 system_pods.go:89] "coredns-66bc5c9577-s6ks4" [c08b86c9-b018-4ce4-b6d4-67098ccdac5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.380167  360803 system_pods.go:89] "etcd-bridge-183381" [f1b85224-1cb3-4122-8238-978f649c71cb] Running
	I1205 08:01:55.380174  360803 system_pods.go:89] "kube-apiserver-bridge-183381" [7d25209e-11c7-4535-aa84-2a85d8fbccba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:01:55.380181  360803 system_pods.go:89] "kube-controller-manager-bridge-183381" [b2406d16-bfb3-4942-84e6-16c8891ac2a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 08:01:55.380191  360803 system_pods.go:89] "kube-proxy-rkg4f" [b52c5121-0a6f-47be-8ad6-7bcf194fa918] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 08:01:55.380208  360803 system_pods.go:89] "kube-scheduler-bridge-183381" [366fbc0b-c997-4173-aab8-30debe96101e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 08:01:55.380218  360803 system_pods.go:89] "storage-provisioner" [b48d1889-8658-4787-9324-d52f49793eeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:55.380243  360803 retry.go:31] will retry after 256.020985ms: missing components: kube-dns, kube-proxy
	I1205 08:01:55.561882  360803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-183381" context rescaled to 1 replicas
	I1205 08:01:55.640888  360803 system_pods.go:86] 8 kube-system pods found
	I1205 08:01:55.640926  360803 system_pods.go:89] "coredns-66bc5c9577-q884l" [74f41c26-9198-4f8a-ae22-52179118ca67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.640943  360803 system_pods.go:89] "coredns-66bc5c9577-s6ks4" [c08b86c9-b018-4ce4-b6d4-67098ccdac5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.640949  360803 system_pods.go:89] "etcd-bridge-183381" [f1b85224-1cb3-4122-8238-978f649c71cb] Running
	I1205 08:01:55.640956  360803 system_pods.go:89] "kube-apiserver-bridge-183381" [7d25209e-11c7-4535-aa84-2a85d8fbccba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:01:55.640961  360803 system_pods.go:89] "kube-controller-manager-bridge-183381" [b2406d16-bfb3-4942-84e6-16c8891ac2a6] Running
	I1205 08:01:55.640970  360803 system_pods.go:89] "kube-proxy-rkg4f" [b52c5121-0a6f-47be-8ad6-7bcf194fa918] Running
	I1205 08:01:55.640975  360803 system_pods.go:89] "kube-scheduler-bridge-183381" [366fbc0b-c997-4173-aab8-30debe96101e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 08:01:55.640989  360803 system_pods.go:89] "storage-provisioner" [b48d1889-8658-4787-9324-d52f49793eeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:55.641003  360803 retry.go:31] will retry after 294.651643ms: missing components: kube-dns
	I1205 08:01:55.940020  360803 system_pods.go:86] 8 kube-system pods found
	I1205 08:01:55.940055  360803 system_pods.go:89] "coredns-66bc5c9577-q884l" [74f41c26-9198-4f8a-ae22-52179118ca67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.940065  360803 system_pods.go:89] "coredns-66bc5c9577-s6ks4" [c08b86c9-b018-4ce4-b6d4-67098ccdac5a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:55.940070  360803 system_pods.go:89] "etcd-bridge-183381" [f1b85224-1cb3-4122-8238-978f649c71cb] Running
	I1205 08:01:55.940077  360803 system_pods.go:89] "kube-apiserver-bridge-183381" [7d25209e-11c7-4535-aa84-2a85d8fbccba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:01:55.940083  360803 system_pods.go:89] "kube-controller-manager-bridge-183381" [b2406d16-bfb3-4942-84e6-16c8891ac2a6] Running
	I1205 08:01:55.940087  360803 system_pods.go:89] "kube-proxy-rkg4f" [b52c5121-0a6f-47be-8ad6-7bcf194fa918] Running
	I1205 08:01:55.940095  360803 system_pods.go:89] "kube-scheduler-bridge-183381" [366fbc0b-c997-4173-aab8-30debe96101e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 08:01:55.940106  360803 system_pods.go:89] "storage-provisioner" [b48d1889-8658-4787-9324-d52f49793eeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 08:01:55.940121  360803 retry.go:31] will retry after 463.550811ms: missing components: kube-dns
	I1205 08:01:56.417332  360803 system_pods.go:86] 8 kube-system pods found
	I1205 08:01:56.417377  360803 system_pods.go:89] "coredns-66bc5c9577-q884l" [74f41c26-9198-4f8a-ae22-52179118ca67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:56.417388  360803 system_pods.go:89] "coredns-66bc5c9577-s6ks4" [c08b86c9-b018-4ce4-b6d4-67098ccdac5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 08:01:56.417397  360803 system_pods.go:89] "etcd-bridge-183381" [f1b85224-1cb3-4122-8238-978f649c71cb] Running
	I1205 08:01:56.417405  360803 system_pods.go:89] "kube-apiserver-bridge-183381" [7d25209e-11c7-4535-aa84-2a85d8fbccba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 08:01:56.417410  360803 system_pods.go:89] "kube-controller-manager-bridge-183381" [b2406d16-bfb3-4942-84e6-16c8891ac2a6] Running
	I1205 08:01:56.417421  360803 system_pods.go:89] "kube-proxy-rkg4f" [b52c5121-0a6f-47be-8ad6-7bcf194fa918] Running
	I1205 08:01:56.417428  360803 system_pods.go:89] "kube-scheduler-bridge-183381" [366fbc0b-c997-4173-aab8-30debe96101e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 08:01:56.417449  360803 system_pods.go:89] "storage-provisioner" [b48d1889-8658-4787-9324-d52f49793eeb] Running
	I1205 08:01:56.417459  360803 system_pods.go:126] duration metric: took 1.127354477s to wait for k8s-apps to be running ...
	I1205 08:01:56.417471  360803 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 08:01:56.417540  360803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 08:01:56.437304  360803 system_svc.go:56] duration metric: took 19.824744ms WaitForService to wait for kubelet
	I1205 08:01:56.437336  360803 kubeadm.go:587] duration metric: took 2.731889438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 08:01:56.437362  360803 node_conditions.go:102] verifying NodePressure condition ...
	I1205 08:01:56.440483  360803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 08:01:56.440526  360803 node_conditions.go:123] node cpu capacity is 2
	I1205 08:01:56.440539  360803 node_conditions.go:105] duration metric: took 3.170797ms to run NodePressure ...
	I1205 08:01:56.440552  360803 start.go:242] waiting for startup goroutines ...
	I1205 08:01:56.440563  360803 start.go:247] waiting for cluster config update ...
	I1205 08:01:56.440591  360803 start.go:256] writing updated cluster config ...
	I1205 08:01:56.440915  360803 ssh_runner.go:195] Run: rm -f paused
	I1205 08:01:56.445127  360803 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 08:01:56.449282  360803 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q884l" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:01:58.455799  360803 pod_ready.go:104] pod "coredns-66bc5c9577-q884l" is not "Ready", error: <nil>
	W1205 08:02:00.455943  360803 pod_ready.go:104] pod "coredns-66bc5c9577-q884l" is not "Ready", error: <nil>
	W1205 08:02:02.955049  360803 pod_ready.go:104] pod "coredns-66bc5c9577-q884l" is not "Ready", error: <nil>
	W1205 08:02:05.454666  360803 pod_ready.go:104] pod "coredns-66bc5c9577-q884l" is not "Ready", error: <nil>
	I1205 08:02:06.452430  360803 pod_ready.go:99] pod "coredns-66bc5c9577-q884l" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-q884l" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-q884l" not found
	I1205 08:02:06.452464  360803 pod_ready.go:86] duration metric: took 10.003151546s for pod "coredns-66bc5c9577-q884l" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:06.452474  360803 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s6ks4" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 08:02:08.458772  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:10.957758  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:12.958196  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:15.458402  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:17.958115  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:20.457792  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:22.458048  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:24.458195  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:26.460482  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	W1205 08:02:28.957783  360803 pod_ready.go:104] pod "coredns-66bc5c9577-s6ks4" is not "Ready", error: <nil>
	I1205 08:02:30.958586  360803 pod_ready.go:94] pod "coredns-66bc5c9577-s6ks4" is "Ready"
	I1205 08:02:30.958615  360803 pod_ready.go:86] duration metric: took 24.506133903s for pod "coredns-66bc5c9577-s6ks4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:30.961669  360803 pod_ready.go:83] waiting for pod "etcd-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:30.966372  360803 pod_ready.go:94] pod "etcd-bridge-183381" is "Ready"
	I1205 08:02:30.966401  360803 pod_ready.go:86] duration metric: took 4.705254ms for pod "etcd-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:30.968766  360803 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:30.973424  360803 pod_ready.go:94] pod "kube-apiserver-bridge-183381" is "Ready"
	I1205 08:02:30.973452  360803 pod_ready.go:86] duration metric: took 4.657959ms for pod "kube-apiserver-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:30.975690  360803 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:31.156737  360803 pod_ready.go:94] pod "kube-controller-manager-bridge-183381" is "Ready"
	I1205 08:02:31.156764  360803 pod_ready.go:86] duration metric: took 181.005071ms for pod "kube-controller-manager-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:31.356838  360803 pod_ready.go:83] waiting for pod "kube-proxy-rkg4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:31.755858  360803 pod_ready.go:94] pod "kube-proxy-rkg4f" is "Ready"
	I1205 08:02:31.755888  360803 pod_ready.go:86] duration metric: took 399.023089ms for pod "kube-proxy-rkg4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:31.955809  360803 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:32.356852  360803 pod_ready.go:94] pod "kube-scheduler-bridge-183381" is "Ready"
	I1205 08:02:32.356880  360803 pod_ready.go:86] duration metric: took 401.044755ms for pod "kube-scheduler-bridge-183381" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 08:02:32.356892  360803 pod_ready.go:40] duration metric: took 35.911687199s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
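
The pod_ready polling above waits, per label, for each control-plane pod to become Ready or be deleted. A comparable, simpler gate with stock kubectl, using two of the same labels, would be:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m

kubectl wait has no "or be gone" branch, which is why minikube implements its own loop that also tolerates pod deletion, as with coredns-66bc5c9577-q884l above.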
	I1205 08:02:32.410570  360803 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 08:02:32.413829  360803 out.go:179] * Done! kubectl is now configured to use "bridge-183381" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467880478Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.467944766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468010408Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468070889Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468137597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468209844Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468274591Z" level=info msg="runtime interface created"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468329098Z" level=info msg="created NRI interface"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468386092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468477137Z" level=info msg="Connect containerd service"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.468834006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.469689958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479649538Z" level=info msg="Start subscribing containerd event"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.479732066Z" level=info msg="Start recovering state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480037743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.480474506Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497682696Z" level=info msg="Start event monitor"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497734635Z" level=info msg="Start cni network conf syncer for default"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497745409Z" level=info msg="Start streaming server"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497758537Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497768318Z" level=info msg="runtime interface starting up..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497774849Z" level=info msg="starting plugins..."
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.497803961Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 07:45:10 no-preload-241270 containerd[556]: time="2025-12-05T07:45:10.498055853Z" level=info msg="containerd successfully booted in 0.055465s"
	Dec 05 07:45:10 no-preload-241270 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 08:04:52.327471   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:52.327953   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:52.329563   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:52.329991   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1205 08:04:52.331611   10224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.780023] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 07:22] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 07:54] hrtimer: interrupt took 15630962 ns
	
	
	==> kernel <==
	 08:04:52 up  2:47,  0 user,  load average: 1.25, 1.29, 1.42
	Linux no-preload-241270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 08:04:49 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:49 no-preload-241270 kubelet[10083]: E1205 08:04:49.392673   10083 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:49 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:49 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1571.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:50 no-preload-241270 kubelet[10089]: E1205 08:04:50.144836   10089 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1572.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:50 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:50 no-preload-241270 kubelet[10094]: E1205 08:04:50.898168   10094 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:50 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:51 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1573.
	Dec 05 08:04:51 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:51 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:51 no-preload-241270 kubelet[10123]: E1205 08:04:51.656133   10123 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 08:04:51 no-preload-241270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 08:04:51 no-preload-241270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 08:04:52 no-preload-241270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1574.
	Dec 05 08:04:52 no-preload-241270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 08:04:52 no-preload-241270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
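
The kubelet restart loop at the end of the log above (restart counter 1571 through 1574, every attempt exiting with "kubelet is configured to not run on a host using cgroup v1") is the root cause of this failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host. A minimal spot-check on such a node, assuming shell access and the usual kubelet config path (both commands are illustrative, not part of the harness):

    # cgroup2fs means unified cgroup v2; tmpfs means legacy cgroup v1
    stat -fc %T /sys/fs/cgroup
    # the kubelet-side switch (failCgroupV1, added in v1.31), if this config path is in use (assumed)
    grep -i failcgroupv1 /var/lib/kubelet/config.yaml
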
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241270 -n no-preload-241270: exit status 2 (329.934881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-241270" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (274.19s)
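
The "connection refused" from kubectl on localhost:8443 and the "Stopped" apiserver status above are downstream of the same kubelet failure: kube-apiserver runs as a static pod that the kubelet launches, so while the kubelet is crash-looping nothing ever binds the apiserver port. Assuming shell access to the node, a quick confirmation could look like:

    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"
    systemctl is-active kubelet   # expect activating/failed while the restart loop continues
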

                                                
                                    

Test pass (346/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.79
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.11
9 TestDownloadOnly/v1.28.0/DeleteAll 0.27
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.34.2/json-events 8.29
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.28
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 152.26
38 TestAddons/serial/Volcano 41.7
40 TestAddons/serial/GCPAuth/Namespaces 0.23
41 TestAddons/serial/GCPAuth/FakeCredentials 8.86
44 TestAddons/parallel/Registry 17.74
45 TestAddons/parallel/RegistryCreds 1.25
46 TestAddons/parallel/Ingress 20.75
47 TestAddons/parallel/InspektorGadget 10.79
48 TestAddons/parallel/MetricsServer 6.81
50 TestAddons/parallel/CSI 52.13
51 TestAddons/parallel/Headlamp 24.96
52 TestAddons/parallel/CloudSpanner 6.59
53 TestAddons/parallel/LocalPath 10.46
54 TestAddons/parallel/NvidiaDevicePlugin 6.54
55 TestAddons/parallel/Yakd 11.87
57 TestAddons/StoppedEnableDisable 12.42
58 TestCertOptions 37.86
59 TestCertExpiration 230.93
61 TestForceSystemdFlag 35.5
62 TestForceSystemdEnv 38.43
63 TestDockerEnvContainerd 48.12
67 TestErrorSpam/setup 32.2
68 TestErrorSpam/start 0.86
69 TestErrorSpam/status 1.12
70 TestErrorSpam/pause 1.74
71 TestErrorSpam/unpause 1.74
72 TestErrorSpam/stop 1.61
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 49.06
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.03
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
84 TestFunctional/serial/CacheCmd/cache/add_local 1.34
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.15
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 40.98
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.45
95 TestFunctional/serial/LogsFileCmd 1.47
96 TestFunctional/serial/InvalidService 4.41
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 9.41
100 TestFunctional/parallel/DryRun 0.4
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.4
106 TestFunctional/parallel/ServiceCmdConnect 8.62
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 23.77
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 2.35
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.19
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
122 TestFunctional/parallel/License 0.36
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ServiceCmd/List 0.61
137 TestFunctional/parallel/ProfileCmd/profile_list 0.55
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
140 TestFunctional/parallel/MountCmd/any-port 8.76
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
142 TestFunctional/parallel/ServiceCmd/Format 0.38
143 TestFunctional/parallel/ServiceCmd/URL 0.5
144 TestFunctional/parallel/MountCmd/specific-port 2.51
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.35
146 TestFunctional/parallel/Version/short 0.08
147 TestFunctional/parallel/Version/components 1.36
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
153 TestFunctional/parallel/ImageCommands/Setup 0.68
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
157 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.29
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.79
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.17
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.03
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.83
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.43
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.16
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.31
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.68
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.59
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.96
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.26
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.66
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.1
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.05
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.48
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.66
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.14
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 161.94
265 TestMultiControlPlane/serial/DeployApp 7.5
266 TestMultiControlPlane/serial/PingHostFromPods 1.61
267 TestMultiControlPlane/serial/AddWorkerNode 58.73
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
270 TestMultiControlPlane/serial/CopyFile 20.32
271 TestMultiControlPlane/serial/StopSecondaryNode 13.02
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.24
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.14
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 102.54
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.31
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
278 TestMultiControlPlane/serial/StopCluster 36.35
279 TestMultiControlPlane/serial/RestartCluster 60.78
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.85
281 TestMultiControlPlane/serial/AddSecondaryNode 74.29
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
287 TestJSONOutput/start/Command 81.61
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.71
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.61
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.08
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 40.96
313 TestKicCustomNetwork/use_default_bridge_network 35.34
314 TestKicExistingNetwork 36
315 TestKicCustomSubnet 35.24
316 TestKicStaticIP 36.87
317 TestMainNoArgs 0.07
318 TestMinikubeProfile 69.37
321 TestMountStart/serial/StartWithMountFirst 8.34
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.43
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.73
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.45
329 TestMountStart/serial/VerifyMountPostStop 0.26
332 TestMultiNode/serial/FreshStart2Nodes 107.97
333 TestMultiNode/serial/DeployApp2Nodes 6.7
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 27.29
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.75
338 TestMultiNode/serial/CopyFile 10.76
339 TestMultiNode/serial/StopNode 2.4
340 TestMultiNode/serial/StartAfterStop 8.13
341 TestMultiNode/serial/RestartKeepsNodes 80.8
342 TestMultiNode/serial/DeleteNode 5.74
343 TestMultiNode/serial/StopMultiNode 24.17
344 TestMultiNode/serial/RestartMultiNode 56.87
345 TestMultiNode/serial/ValidateNameConflict 37.94
350 TestPreload 120.52
352 TestScheduledStopUnix 109.88
355 TestInsufficientStorage 12.43
356 TestRunningBinaryUpgrade 328.72
359 TestMissingContainerUpgrade 177.85
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 52.6
363 TestNoKubernetes/serial/StartWithStopK8s 8.91
364 TestNoKubernetes/serial/Start 8.8
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
367 TestNoKubernetes/serial/ProfileList 1.47
368 TestNoKubernetes/serial/Stop 1.49
369 TestNoKubernetes/serial/StartNoArgs 6.57
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.34
372 TestStoppedBinaryUpgrade/Upgrade 301.93
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.19
382 TestPause/serial/Start 82.97
383 TestPause/serial/SecondStartNoReconfiguration 8.15
384 TestPause/serial/Pause 1.08
385 TestPause/serial/VerifyStatus 0.44
386 TestPause/serial/Unpause 0.88
390 TestPause/serial/PauseAgain 1.17
391 TestPause/serial/DeletePaused 3.16
392 TestPause/serial/VerifyDeletedResources 0.21
397 TestNetworkPlugins/group/false 5.29
402 TestStartStop/group/old-k8s-version/serial/FirstStart 63.16
403 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
405 TestStartStop/group/old-k8s-version/serial/Stop 12.13
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
407 TestStartStop/group/old-k8s-version/serial/SecondStart 53.71
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/old-k8s-version/serial/Pause 3.86
413 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.53
415 TestStartStop/group/embed-certs/serial/FirstStart 86.32
416 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
417 TestStartStop/group/embed-certs/serial/DeployApp 9.37
418 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
419 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
420 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
421 TestStartStop/group/embed-certs/serial/Stop 12.09
422 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
423 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.66
424 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
425 TestStartStop/group/embed-certs/serial/SecondStart 56.47
426 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
427 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
428 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
429 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.25
430 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
431 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.97
432 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
433 TestStartStop/group/embed-certs/serial/Pause 4.7
440 TestStartStop/group/newest-cni/serial/DeployApp 0
442 TestStartStop/group/no-preload/serial/Stop 1.31
443 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
445 TestStartStop/group/newest-cni/serial/Stop 1.3
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 80.7
454 TestNetworkPlugins/group/auto/KubeletFlags 0.34
455 TestNetworkPlugins/group/auto/NetCatPod 9.27
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.16
458 TestNetworkPlugins/group/auto/HairPin 0.26
459 TestNetworkPlugins/group/kindnet/Start 49.23
460 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
462 TestNetworkPlugins/group/kindnet/NetCatPod 9.3
463 TestNetworkPlugins/group/kindnet/DNS 0.18
464 TestNetworkPlugins/group/kindnet/Localhost 0.15
465 TestNetworkPlugins/group/kindnet/HairPin 0.18
466 TestNetworkPlugins/group/calico/Start 64.41
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/calico/KubeletFlags 0.35
469 TestNetworkPlugins/group/calico/NetCatPod 9.32
470 TestNetworkPlugins/group/calico/DNS 0.16
471 TestNetworkPlugins/group/calico/Localhost 0.16
472 TestNetworkPlugins/group/calico/HairPin 0.15
473 TestNetworkPlugins/group/custom-flannel/Start 56.72
474 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
475 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.33
476 TestNetworkPlugins/group/custom-flannel/DNS 0.37
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
479 TestNetworkPlugins/group/enable-default-cni/Start 44.55
480 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
481 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
482 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
483 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
484 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
485 TestNetworkPlugins/group/flannel/Start 62.09
487 TestNetworkPlugins/group/flannel/ControllerPod 6
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
489 TestNetworkPlugins/group/flannel/NetCatPod 10.24
490 TestNetworkPlugins/group/flannel/DNS 0.17
491 TestNetworkPlugins/group/flannel/Localhost 0.15
492 TestNetworkPlugins/group/flannel/HairPin 0.14
493 TestNetworkPlugins/group/bridge/Start 74.04
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
495 TestNetworkPlugins/group/bridge/NetCatPod 8.29
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.16
498 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (9.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-824930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-824930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.79351076s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.79s)
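
The json-events variant drives "start -o=json", which emits one CloudEvent-formatted JSON object per line instead of human-readable output. A minimal consumer sketch, assuming jq is installed and minikube's event schema (step events carry type io.k8s.sigs.minikube.step with the step name under .data.name):

    out/minikube-linux-arm64 start -o=json --download-only -p demo --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'
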

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1205 06:05:15.767917    4192 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1205 06:05:15.767993    4192 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
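
preload-exists only asserts that the tarball cached by the previous step is on disk; the equivalent manual check, using the path reported in the log, is simply:

    ls -lh /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
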

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-824930
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-824930: exit status 85 (108.935672ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-824930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-824930 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:06.020917    4197 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:06.021182    4197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:06.021211    4197 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:06.021230    4197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:06.021563    4197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	W1205 06:05:06.021754    4197 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-2385/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-2385/.minikube/config/config.json: no such file or directory
	I1205 06:05:06.022310    4197 out.go:368] Setting JSON to true
	I1205 06:05:06.023166    4197 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2853,"bootTime":1764911853,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:05:06.023276    4197 start.go:143] virtualization:  
	I1205 06:05:06.027840    4197 out.go:99] [download-only-824930] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1205 06:05:06.028065    4197 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 06:05:06.028170    4197 notify.go:221] Checking for updates...
	I1205 06:05:06.029587    4197 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:06.031311    4197 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:06.033837    4197 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:05:06.035359    4197 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:05:06.037278    4197 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:05:06.042154    4197 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:06.042466    4197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:06.066426    4197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:05:06.066538    4197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:06.472369    4197 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-05 06:05:06.459040854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:06.472491    4197 docker.go:319] overlay module found
	I1205 06:05:06.473831    4197 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:06.473868    4197 start.go:309] selected driver: docker
	I1205 06:05:06.473875    4197 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:06.473982    4197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:06.540141    4197 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-05 06:05:06.53059082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:06.540299    4197 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:06.540573    4197 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:05:06.540752    4197 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:06.542267    4197 out.go:171] Using Docker driver with root privileges
	I1205 06:05:06.543480    4197 cni.go:84] Creating CNI manager for ""
	I1205 06:05:06.543548    4197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:05:06.543562    4197 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:05:06.543634    4197 start.go:353] cluster config:
	{Name:download-only-824930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-824930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:06.544949    4197 out.go:99] Starting "download-only-824930" primary control-plane node in "download-only-824930" cluster
	I1205 06:05:06.544969    4197 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:05:06.546201    4197 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:05:06.546235    4197 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1205 06:05:06.546376    4197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:05:06.562281    4197 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:06.562481    4197 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:05:06.562598    4197 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:06.602529    4197 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1205 06:05:06.602579    4197 cache.go:65] Caching tarball of preloaded images
	I1205 06:05:06.602775    4197 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1205 06:05:06.604458    4197 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1205 06:05:06.604489    4197 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1205 06:05:06.693469    4197 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1205 06:05:06.693604    4197 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-824930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-824930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.11s)
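
The preload download in the log above is fetched against an md5 checksum obtained from the GCS API ("Got checksum from GCS API \"38d7f581f2fa4226c8af2c9106b982b7\""), so the cached tarball can be re-verified by hand:

    md5sum /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
    # expect: 38d7f581f2fa4226c8af2c9106b982b7
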

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-824930
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (8.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-619209 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-619209 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.287584251s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (8.29s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1205 06:05:24.664536    4192 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1205 06:05:24.664574    4192 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-619209
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-619209: exit status 85 (90.838456ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-824930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-824930 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-824930                                                                                                                                                               │ download-only-824930 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-619209 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-619209 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:16.415963    4393 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:16.416104    4393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:16.416117    4393 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:16.416137    4393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:16.416419    4393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:05:16.416849    4393 out.go:368] Setting JSON to true
	I1205 06:05:16.417619    4393 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2863,"bootTime":1764911853,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:05:16.417687    4393 start.go:143] virtualization:  
	I1205 06:05:16.448157    4393 out.go:99] [download-only-619209] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:05:16.448475    4393 notify.go:221] Checking for updates...
	I1205 06:05:16.471733    4393 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:16.493826    4393 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:16.517482    4393 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:05:16.537910    4393 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:05:16.560568    4393 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:05:16.607842    4393 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:16.608102    4393 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:16.628315    4393 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:05:16.628433    4393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:16.703946    4393 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:05:16.694712924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:16.704049    4393 docker.go:319] overlay module found
	I1205 06:05:16.712243    4393 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:16.712290    4393 start.go:309] selected driver: docker
	I1205 06:05:16.712301    4393 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:16.712416    4393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:16.780351    4393 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:05:16.770415094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:16.780505    4393 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:16.780769    4393 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:05:16.780915    4393 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:16.792453    4393 out.go:171] Using Docker driver with root privileges
	I1205 06:05:16.804086    4393 cni.go:84] Creating CNI manager for ""
	I1205 06:05:16.805229    4393 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 06:05:16.805253    4393 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:05:16.805337    4393 start.go:353] cluster config:
	{Name:download-only-619209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-619209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:16.817019    4393 out.go:99] Starting "download-only-619209" primary control-plane node in "download-only-619209" cluster
	I1205 06:05:16.817057    4393 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1205 06:05:16.833141    4393 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:05:16.833209    4393 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 06:05:16.833256    4393 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:05:16.849887    4393 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:05:16.850016    4393 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:05:16.850034    4393 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1205 06:05:16.850038    4393 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1205 06:05:16.850045    4393 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1205 06:05:16.888233    4393 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1205 06:05:16.888270    4393 cache.go:65] Caching tarball of preloaded images
	I1205 06:05:16.888439    4393 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1205 06:05:16.915887    4393 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1205 06:05:16.915920    4393 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1205 06:05:17.009674    4393 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1205 06:05:17.009729    4393 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-619209 host does not exist
	  To start a cluster, run: "minikube start -p download-only-619209"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
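
Note: the download step above validates the preload tarball against the MD5 checksum it fetched from the GCS API ("cd1a05d5493c9270e248bf47fb3f071d"). As a minimal Go sketch of that kind of check (illustrative only, not minikube's actual download.go logic; the file name and checksum are copied from the log):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 streams a file through crypto/md5 and compares the hex digest
    // with the expected checksum string.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // values taken from the download.go:108 line above
        if err := verifyMD5("preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4",
            "cd1a05d5493c9270e248bf47fb3f071d"); err != nil {
            panic(err)
        }
        fmt.Println("preload checksum OK")
    }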

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-619209
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.28s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-449806 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-449806 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (2.280618191s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.28s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-449806
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-449806: exit status 85 (86.209474ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-824930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-824930 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-824930                                                                                                                                                                      │ download-only-824930 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-619209 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-619209 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-619209                                                                                                                                                                      │ download-only-619209 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-449806 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-449806 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:25.156599    4589 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:25.156770    4589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:25.156798    4589 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:25.156819    4589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:25.157070    4589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:05:25.157516    4589 out.go:368] Setting JSON to true
	I1205 06:05:25.158328    4589 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2872,"bootTime":1764911853,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:05:25.158425    4589 start.go:143] virtualization:  
	I1205 06:05:25.161785    4589 out.go:99] [download-only-449806] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:05:25.162015    4589 notify.go:221] Checking for updates...
	I1205 06:05:25.164916    4589 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:25.167948    4589 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:25.171017    4589 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:05:25.173981    4589 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:05:25.177015    4589 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:05:25.182779    4589 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:05:25.183029    4589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:25.215411    4589 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:05:25.215531    4589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:25.281644    4589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:05:25.272646801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:25.281756    4589 docker.go:319] overlay module found
	I1205 06:05:25.284823    4589 out.go:99] Using the docker driver based on user configuration
	I1205 06:05:25.284849    4589 start.go:309] selected driver: docker
	I1205 06:05:25.284855    4589 start.go:927] validating driver "docker" against <nil>
	I1205 06:05:25.284952    4589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:05:25.337638    4589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:05:25.328348696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:05:25.337788    4589 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:25.338095    4589 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:05:25.338242    4589 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:05:25.341333    4589 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-449806 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449806"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
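
Note: both "Last Start" traces above begin by shelling out to docker system info --format "{{json .}}" and decoding the result (the info.go:266 lines). A minimal Go sketch of that call, decoding only the handful of fields the excerpt shows; the struct here is illustrative, not minikube's own info type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo picks out a few of the fields visible in the log excerpt;
    // the real JSON payload is much larger.
    type dockerInfo struct {
        ServerVersion string
        NCPU          int
        MemTotal      int64
        OSType        string
        Architecture  string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s: %d CPUs, %d bytes RAM, %s/%s\n",
            info.ServerVersion, info.NCPU, info.MemTotal, info.OSType, info.Architecture)
    }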

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-449806
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1205 06:05:28.826586    4192 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-208962 --alsologtostderr --binary-mirror http://127.0.0.1:46553 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-208962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-208962
--- PASS: TestBinaryMirror (0.60s)
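
Note: TestBinaryMirror passes --binary-mirror http://127.0.0.1:46553, so kubectl is fetched from a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of such a mirror, assuming a ./mirror directory laid out like dl.k8s.io (the directory name is illustrative; the test harness starts its own server on that port):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Expose pre-downloaded release binaries, e.g.
        // ./mirror/release/v1.34.2/bin/linux/arm64/kubectl and its .sha256 file.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:46553", nil))
    }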

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-683092
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-683092: exit status 85 (64.314742ms)

-- stdout --
	* Profile "addons-683092" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-683092"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-683092
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-683092: exit status 85 (68.2509ms)

-- stdout --
	* Profile "addons-683092" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-683092"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (152.26s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-683092 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-683092 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.25495278s)
--- PASS: TestAddons/Setup (152.26s)

TestAddons/serial/Volcano (41.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 51.306804ms
addons_test.go:876: volcano-admission stabilized in 51.358473ms
addons_test.go:868: volcano-scheduler stabilized in 51.427577ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-2kq7t" [dc33a13b-beb3-4d4b-a9f2-4acabd473fe2] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004061111s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-nkxf5" [d3aaafb3-5f34-4551-b27d-28f40fa14b48] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003954992s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-pjnpq" [29dafe1e-5093-4393-9ac7-7e32ecf4004d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003704636s
addons_test.go:903: (dbg) Run:  kubectl --context addons-683092 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-683092 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-683092 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ba607f43-dca4-42e7-86cf-4bb4eca1d440] Pending
helpers_test.go:352: "test-job-nginx-0" [ba607f43-dca4-42e7-86cf-4bb4eca1d440] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [ba607f43-dca4-42e7-86cf-4bb4eca1d440] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003719165s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable volcano --alsologtostderr -v=1: (11.965401116s)
--- PASS: TestAddons/serial/Volcano (41.70s)
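
Note: the "waiting 6m0s for pods matching ..." lines above poll the cluster for pods whose labels match a selector until one reports Running. A rough client-go sketch of that pattern, assuming a kubeconfig at the default location (this is not the harness's helpers_test.go implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRunning polls until at least one pod matching selector reports
    // phase Running, or the deadline passes.
    func waitForRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // selector and namespace taken from the Volcano test above
        if err := waitForRunning(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("scheduler is running")
    }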

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-683092 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-683092 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (8.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-683092 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-683092 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [700180ef-3162-45f4-a653-c4f725051baf] Pending
helpers_test.go:352: "busybox" [700180ef-3162-45f4-a653-c4f725051baf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003808142s
addons_test.go:694: (dbg) Run:  kubectl --context addons-683092 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-683092 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-683092 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-683092 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.86s)

TestAddons/parallel/Registry (17.74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.122639ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rsgsl" [65aec780-e0d6-4488-a0fb-937fe0f477b0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003692315s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5msp8" [557f05f2-d484-45e9-b185-49fae1f50a6c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003250096s
addons_test.go:392: (dbg) Run:  kubectl --context addons-683092 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-683092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-683092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.560047002s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 ip
2025/12/05 06:09:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.74s)
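
Note: the registry probe above runs wget --spider from a busybox pod against the Service DNS name. The rough Go equivalent is a bare HEAD request; as in the test, the host name only resolves from inside the cluster:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Equivalent of "wget --spider -S": fetch headers only, no body.
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }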

TestAddons/parallel/RegistryCreds (1.25s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.07568ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-683092
addons_test.go:332: (dbg) Run:  kubectl --context addons-683092 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.25s)

TestAddons/parallel/Ingress (20.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-683092 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-683092 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-683092 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5babe56b-7143-4ad7-b718-ab2898f2c876] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5babe56b-7143-4ad7-b718-ab2898f2c876] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003339923s
I1205 06:09:53.819405    4192 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-683092 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable ingress-dns --alsologtostderr -v=1: (1.915774194s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable ingress --alsologtostderr -v=1: (7.861206648s)
--- PASS: TestAddons/parallel/Ingress (20.75s)
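
Note: the ingress check above curls the controller with an overridden Host header so the request matches the nginx.example.com Ingress rule. In Go the virtual host is set on the Request struct, not via Header.Set; a minimal sketch (the node IP is taken from the log, adjust for your cluster):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Go equivalent of: curl -s http://192.168.49.2/ -H 'Host: nginx.example.com'
        req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // routes the request to the nginx backend
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }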

TestAddons/parallel/InspektorGadget (10.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-zhgfq" [adc13602-a7ea-4e73-861e-c793cc9d9b0a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003787516s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable inspektor-gadget --alsologtostderr -v=1: (5.784031043s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

TestAddons/parallel/MetricsServer (6.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.059122ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wrtx6" [c712b9e3-d22c-4808-bcae-898cba7c6e8a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003023196s
addons_test.go:463: (dbg) Run:  kubectl --context addons-683092 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)

TestAddons/parallel/CSI (52.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1205 06:09:19.388364    4192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 06:09:19.391791    4192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 06:09:19.391817    4192 kapi.go:107] duration metric: took 5.576044ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.585881ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-683092 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-683092 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9a19c9e6-aa97-43b5-ab47-11d28c0c0c38] Pending
helpers_test.go:352: "task-pv-pod" [9a19c9e6-aa97-43b5-ab47-11d28c0c0c38] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003321709s
addons_test.go:572: (dbg) Run:  kubectl --context addons-683092 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-683092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-683092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-683092 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-683092 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-683092 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-683092 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b5272b42-a7d3-421b-a3ee-0143fe33783e] Pending
helpers_test.go:352: "task-pv-pod-restore" [b5272b42-a7d3-421b-a3ee-0143fe33783e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b5272b42-a7d3-421b-a3ee-0143fe33783e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004200121s
addons_test.go:614: (dbg) Run:  kubectl --context addons-683092 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-683092 delete pod task-pv-pod-restore: (1.125570856s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-683092 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-683092 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.056662672s)
--- PASS: TestAddons/parallel/CSI (52.13s)
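
Note: the long run of "get pvc ... -o jsonpath={.status.phase}" calls above is simply a poll until the claim reports Bound. The same loop with client-go, as a sketch assuming a kubeconfig at the default location:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Poll .status.phase of the "hpvc" claim until it is Bound.
        for {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            fmt.Println("phase:", pvc.Status.Phase)
            if pvc.Status.Phase == corev1.ClaimBound {
                break
            }
            time.Sleep(2 * time.Second)
        }
    }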

TestAddons/parallel/Headlamp (24.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-683092 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-683092 --alsologtostderr -v=1: (1.001443143s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-q66s4" [8b148649-9203-4371-a9dc-82251295c200] Pending
helpers_test.go:352: "headlamp-dfcdc64b-q66s4" [8b148649-9203-4371-a9dc-82251295c200] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-q66s4" [8b148649-9203-4371-a9dc-82251295c200] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.003643812s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable headlamp --alsologtostderr -v=1: (5.954862834s)
--- PASS: TestAddons/parallel/Headlamp (24.96s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-2jw5t" [7db88e73-f68f-470f-a708-32a91fa9195a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003036816s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (10.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-683092 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-683092 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [033f8c9e-c503-463d-ba9c-fa785c85fda9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [033f8c9e-c503-463d-ba9c-fa785c85fda9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [033f8c9e-c503-463d-ba9c-fa785c85fda9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00409549s
addons_test.go:967: (dbg) Run:  kubectl --context addons-683092 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 ssh "cat /opt/local-path-provisioner/pvc-799d8f38-4cfa-48f4-ab5f-1df6c6c9fb26_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-683092 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-683092 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.46s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-595m2" [3d755edc-44a2-4bfc-aa2a-9d0bbcfe7c3c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003789358s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-x2jcx" [e4c023fe-948b-43fe-8c0a-602e14373934] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003910825s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-683092 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-683092 addons disable yakd --alsologtostderr -v=1: (5.861988893s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-683092
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-683092: (12.108936835s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-683092
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-683092
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-683092
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestCertOptions (37.86s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-461373 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-461373 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.744874445s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-461373 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-461373 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-461373 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-461373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-461373
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-461373: (2.155751227s)
--- PASS: TestCertOptions (37.86s)
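Note: to confirm by hand that the extra names and IPs from the flags above landed in the apiserver certificate, a minimal sketch (values taken from this run; assumes 'minikube' on PATH):
  minikube -p cert-options-461373 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"
  # expect DNS:www.google.com and IP Address:192.168.15.15 among the SAN entries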

TestCertExpiration (230.93s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-379442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1205 07:28:01.797181    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-379442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.538627671s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-379442 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-379442 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.940759144s)
helpers_test.go:175: Cleaning up "cert-expiration-379442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-379442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-379442: (3.445591053s)
--- PASS: TestCertExpiration (230.93s)
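Note: the two starts above exercise certificate rotation: the first issues 3-minute certs, the test waits out that window (most of this entry's 230s), and the second start regenerates them with a one-year expiry. A by-hand sketch under the same assumptions:
  minikube start -p cert-expiration-379442 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180    # let the 3m certificates lapse
  minikube start -p cert-expiration-379442 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd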

TestForceSystemdFlag (35.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-002201 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1205 07:27:16.967662    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-002201 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.680185882s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-002201 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-002201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-002201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-002201: (2.500077628s)
--- PASS: TestForceSystemdFlag (35.50s)
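Note: the cat of /etc/containerd/config.toml above is how the test asserts that --force-systemd switched containerd to the systemd cgroup driver. A minimal check, assuming the standard containerd runc option name:
  minikube start -p force-systemd-flag-002201 --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
  minikube -p force-systemd-flag-002201 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
  # expect: SystemdCgroup = true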

TestForceSystemdEnv (38.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-788551 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-788551 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.542608055s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-788551 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-788551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-788551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-788551: (2.569632743s)
--- PASS: TestForceSystemdEnv (38.43s)

TestDockerEnvContainerd (48.12s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-379425 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-379425 --driver=docker  --container-runtime=containerd: (32.339770199s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-379425"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-379425": (1.076114681s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cUn862ouA5Fp/agent.23555" SSH_AGENT_PID="23556" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cUn862ouA5Fp/agent.23555" SSH_AGENT_PID="23556" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cUn862ouA5Fp/agent.23555" SSH_AGENT_PID="23556" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.256068193s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-cUn862ouA5Fp/agent.23555" SSH_AGENT_PID="23556" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-379425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-379425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-379425: (2.04026004s)
--- PASS: TestDockerEnvContainerd (48.12s)
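Note: docker-env with --ssh-host --ssh-add points a host docker client at the engine inside the node over SSH, which is what the SSH_AUTH_SOCK/DOCKER_HOST invocations above script. The usual interactive form (profile name from this run; assumes 'minikube' on PATH):
  eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-379425)"
  docker version                                   # now talks to the daemon inside the minikube container
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls                                  # the freshly built image is visible in-cluster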

TestErrorSpam/setup (32.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-899707 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-899707 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-899707 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-899707 --driver=docker  --container-runtime=containerd: (32.196294281s)
--- PASS: TestErrorSpam/setup (32.20s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 stop: (1.42296024s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-899707 --log_dir /tmp/nospam-899707 stop
--- PASS: TestErrorSpam/stop (1.61s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1205 06:13:01.799726    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:01.806247    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:01.817713    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:01.839102    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:01.880427    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:01.961851    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:02.123435    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:02.445252    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:03.087303    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:04.368753    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:06.930531    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-226068 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.057024227s)
--- PASS: TestFunctional/serial/StartWithProxy (49.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.03s)

=== RUN   TestFunctional/serial/SoftStart
I1205 06:13:08.975236    4192 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --alsologtostderr -v=8
E1205 06:13:12.052527    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-226068 --alsologtostderr -v=8: (7.028571495s)
functional_test.go:678: soft start took 7.031722893s for "functional-226068" cluster.
I1205 06:13:16.004158    4192 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.03s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-226068 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:3.1: (1.367834864s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:3.3: (1.155493268s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 cache add registry.k8s.io/pause:latest: (1.053458977s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)
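Note: 'cache add' pulls an image on the host and side-loads it into the node's containerd store. A minimal sketch mirroring the commands above (assumes 'minikube' on PATH):
  minikube -p functional-226068 cache add registry.k8s.io/pause:3.1
  minikube -p functional-226068 ssh sudo crictl images | grep pause    # confirm it is visible to the runtime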

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-226068 /tmp/TestFunctionalserialCacheCmdcacheadd_local3286305779/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache add minikube-local-cache-test:functional-226068
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache delete minikube-local-cache-test:functional-226068
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-226068
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.878777ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cache reload
E1205 06:13:22.294910    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
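Note: the reload flow above is: delete the image inside the node, observe the miss, then let 'cache reload' restore everything still listed in the local cache. By hand (assumes 'minikube' on PATH):
  minikube -p functional-226068 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-226068 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  minikube -p functional-226068 cache reload
  minikube -p functional-226068 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again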

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 kubectl -- --context functional-226068 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-226068 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (40.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:13:42.776335    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-226068 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.977338588s)
functional_test.go:776: restart took 40.977435524s for "functional-226068" cluster.
I1205 06:14:04.735007    4192 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (40.98s)
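Note: --extra-config takes component.flag=value and forwards the flag to that control-plane component on (re)start; here it becomes --enable-admission-plugins=NamespaceAutoProvision on kube-apiserver. A hedged sketch, assuming the documented behavior of that admission plugin (it provisions a missing namespace instead of rejecting the object):
  minikube start -p functional-226068 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-226068 run probe --image kicbase/echo-server -n brand-new-ns   # namespace is auto-created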

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-226068 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
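Note: the health check above boils down to listing the static control-plane pods by their tier=control-plane label and reading phase/readiness. An equivalent one-liner, assuming jq is available:
  kubectl --context functional-226068 get po -l tier=control-plane -n kube-system -o json \
    | jq -r '.items[] | .metadata.labels.component + ": " + .status.phase'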

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 logs: (1.44534681s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 logs --file /tmp/TestFunctionalserialLogsFileCmd3900743415/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 logs --file /tmp/TestFunctionalserialLogsFileCmd3900743415/001/logs.txt: (1.470539577s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.41s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-226068 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-226068
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-226068: exit status 115 (619.209553ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30892 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-226068 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
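Note: exit status 115 (SVC_UNREACHABLE) is the expected result here: the manifest presumably defines a Service whose selector matches no running pod, so there is no endpoint to reach. The round trip by hand (assumes 'minikube' on PATH):
  kubectl --context functional-226068 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-226068; echo "exit: $?"   # exit: 115
  kubectl --context functional-226068 delete -f testdata/invalidsvc.yaml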

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 config get cpus: exit status 14 (71.973205ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 config get cpus: exit status 14 (81.357326ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
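Note: 'config get' on an unset key exits 14, which is what the two Non-zero exits above assert. The full round trip (assumes 'minikube' on PATH):
  minikube -p functional-226068 config set cpus 2
  minikube -p functional-226068 config get cpus      # prints 2, exit 0
  minikube -p functional-226068 config unset cpus
  minikube -p functional-226068 config get cpus      # exit 14: key not found in config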

TestFunctional/parallel/DashboardCmd (9.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-226068 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-226068 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 38565: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.41s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-226068 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (175.355704ms)
-- stdout --
	* [functional-226068] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1205 06:14:43.004366   38314 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:43.004666   38314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:43.004695   38314 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:43.004718   38314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:43.005108   38314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:14:43.005684   38314 out.go:368] Setting JSON to false
	I1205 06:14:43.006795   38314 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3430,"bootTime":1764911853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:14:43.006912   38314 start.go:143] virtualization:  
	I1205 06:14:43.008809   38314 out.go:179] * [functional-226068] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:14:43.010540   38314 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:14:43.010630   38314 notify.go:221] Checking for updates...
	I1205 06:14:43.013371   38314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:14:43.014680   38314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:14:43.015958   38314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:14:43.016995   38314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:14:43.018181   38314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:14:43.020095   38314 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 06:14:43.020672   38314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:14:43.046814   38314 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:14:43.046922   38314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:14:43.120570   38314 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 06:14:43.106614674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:14:43.120674   38314 docker.go:319] overlay module found
	I1205 06:14:43.122411   38314 out.go:179] * Using the docker driver based on existing profile
	I1205 06:14:43.124034   38314 start.go:309] selected driver: docker
	I1205 06:14:43.124057   38314 start.go:927] validating driver "docker" against &{Name:functional-226068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-226068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:14:43.124166   38314 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:14:43.126233   38314 out.go:203] 
	W1205 06:14:43.127369   38314 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:14:43.128769   38314 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
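Note: --dry-run runs validation only; exit code 23 above corresponds to RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum the validator enforces. Sketch (assumes 'minikube' on PATH):
  minikube start -p functional-226068 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  echo $?    # 23: requested memory below the usable minimum
  minikube start -p functional-226068 --dry-run --driver=docker --container-runtime=containerd   # validates cleanly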

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-226068 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-226068 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (180.680159ms)
-- stdout --
	* [functional-226068] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1205 06:14:42.840791   38266 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:42.840919   38266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:42.840928   38266 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:42.840933   38266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:42.841781   38266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:14:42.842197   38266 out.go:368] Setting JSON to false
	I1205 06:14:42.843119   38266 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3430,"bootTime":1764911853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:14:42.843194   38266 start.go:143] virtualization:  
	I1205 06:14:42.844835   38266 out.go:179] * [functional-226068] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1205 06:14:42.846393   38266 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:14:42.846533   38266 notify.go:221] Checking for updates...
	I1205 06:14:42.850207   38266 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:14:42.851716   38266 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:14:42.853401   38266 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:14:42.854550   38266 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:14:42.855786   38266 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:14:42.857563   38266 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 06:14:42.858168   38266 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:14:42.882836   38266 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:14:42.882959   38266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:14:42.944913   38266 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 06:14:42.93579233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:14:42.945026   38266 docker.go:319] overlay module found
	I1205 06:14:42.946765   38266 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:14:42.947904   38266 start.go:309] selected driver: docker
	I1205 06:14:42.947920   38266 start.go:927] validating driver "docker" against &{Name:functional-226068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-226068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:14:42.948075   38266 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:14:42.950593   38266 out.go:203] 
	W1205 06:14:42.952955   38266 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:14:42.954067   38266 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
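Note: 'status -f' takes a Go template over the status struct (the run above uses the fields Host, Kubelet, APIServer, and Kubeconfig; its "kublet" label is just free text inside the template, not a field name). Typical forms (assumes 'minikube' on PATH):
  minikube -p functional-226068 status
  minikube -p functional-226068 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
  minikube -p functional-226068 status -o json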

TestFunctional/parallel/ServiceCmdConnect (8.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-226068 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-226068 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vwprs" [a451c6c8-b833-4cf7-9b1a-b07597e39a54] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1205 06:14:23.738025    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-connect-7d85dfc575-vwprs" [a451c6c8-b833-4cf7-9b1a-b07597e39a54] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003175247s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31874
functional_test.go:1680: http://192.168.49.2:31874: success! body:
Request served by hello-node-connect-7d85dfc575-vwprs
HTTP/1.1 GET /
Host: 192.168.49.2:31874
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.62s)
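Note: the connectivity check above is: deploy an echo server, expose it as a NodePort, resolve the URL, and fetch it. By hand (assumes 'minikube' on PATH; the NodePort differs per run):
  kubectl --context functional-226068 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-226068 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-226068 service hello-node-connect --url)   # e.g. http://192.168.49.2:31874
  curl -s "$URL"    # echoes the request back, as in the body above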

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (23.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0cd6db66-ddd9-4edd-9b75-d9bd3f71f62d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003428199s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-226068 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-226068 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-226068 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-226068 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4f959467-19c2-477a-bd4f-829b22a317ee] Pending
helpers_test.go:352: "sp-pod" [4f959467-19c2-477a-bd4f-829b22a317ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4f959467-19c2-477a-bd4f-829b22a317ee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003865032s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-226068 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-226068 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-226068 delete -f testdata/storage-provisioner/pod.yaml: (1.71312808s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-226068 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [51a0599a-0401-4530-bf24-285a839754f7] Pending
helpers_test.go:352: "sp-pod" [51a0599a-0401-4530-bf24-285a839754f7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003365536s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-226068 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.77s)
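A minimal stand-in for testdata/storage-provisioner/pvc.yaml, assuming the default StorageClass and an illustrative 500Mi request (the real manifest may differ):

# create the claim the test binds, then inspect it
cat <<'EOF' | kubectl --context functional-226068 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-226068 get pvc myclaim -o json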

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh -n functional-226068 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cp functional-226068:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3107436906/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh -n functional-226068 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh -n functional-226068 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)
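The cp sequence above as standalone commands, using this run's paths:

# copy a file into the node, back out to the host, then read it back over ssh
minikube -p functional-226068 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-226068 cp functional-226068:/home/docker/cp-test.txt /tmp/cp-test.txt
minikube -p functional-226068 ssh "sudo cat /home/docker/cp-test.txt"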

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4192/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /etc/test/nested/copy/4192/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
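The synced file comes from the host-side sync tree; a sketch assuming the default MINIKUBE_HOME (the 4192 path segment mirrors this run's test process id):

# files under ~/.minikube/files are mirrored into the node at the same path on start
mkdir -p ~/.minikube/files/etc/test/nested/copy/4192
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/4192/hosts
minikube start -p functional-226068    # (re)starting syncs the tree
minikube -p functional-226068 ssh "sudo cat /etc/test/nested/copy/4192/hosts"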

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4192.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /etc/ssl/certs/4192.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4192.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /usr/share/ca-certificates/4192.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /etc/ssl/certs/41922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /usr/share/ca-certificates/41922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
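The hashed filenames (51391683.0, 3ec20f2e.0) follow the c_rehash convention, i.e. the OpenSSL subject hash of each synced cert; a sketch, assuming the cert lives in a host-side ~/.minikube/certs tree:

# the subject hash names the /etc/ssl/certs entry inside the node
openssl x509 -noout -hash -in ~/.minikube/certs/4192.pem    # would print 51391683 here
minikube -p functional-226068 ssh "sudo cat /etc/ssl/certs/51391683.0"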

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-226068 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
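The same label listing, plus a jsonpath variant that can be easier to read:

# label keys of the first node via go-template, or the full label map via jsonpath
kubectl --context functional-226068 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
kubectl --context functional-226068 get node functional-226068 -o jsonpath='{.metadata.labels}'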

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "sudo systemctl is-active docker": exit status 1 (341.396473ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "sudo systemctl is-active crio": exit status 1 (294.043017ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
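The "status 3" above is systemctl's own exit code: is-active prints the unit state and exits 0 only for an active unit, so ssh propagates the failure. A quick check against this containerd-runtime profile:

# containerd should be the only active runtime here
minikube -p functional-226068 ssh "sudo systemctl is-active containerd"   # active, exit 0
minikube -p functional-226068 ssh "sudo systemctl is-active docker" || echo "docker not active (exit $?)"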

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 35908: os: process already finished
helpers_test.go:519: unable to terminate pid 35715: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-226068 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ddfcd7e4-a27a-4957-8d8e-8e5b21e80939] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [ddfcd7e4-a27a-4957-8d8e-8e5b21e80939] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003398341s
I1205 06:14:22.480817    4192 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-226068 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.157.34 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
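The serial tunnel steps condense to one workflow; the service name and addresses are this run's (the tunnel routes service-CIDR traffic to the cluster, which is why the 10.104.x.x address is reachable):

# keep a tunnel open, then read and hit the LoadBalancer ingress IP
minikube -p functional-226068 tunnel &
kubectl --context functional-226068 apply -f testdata/testsvc.yaml
kubectl --context functional-226068 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.104.157.34/    # use the IP printed above; it differs per run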

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-226068 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-226068 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-226068 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-p4cf8" [2b44469f-c14b-4020-bcac-75b9b5cad798] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-p4cf8" [2b44469f-c14b-4020-bcac-75b9b5cad798] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004219363s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "493.156905ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.659921ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "479.659222ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "65.632371ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
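Both listings emit JSON that scripts can consume; a sketch assuming jq is installed and the valid/invalid top-level arrays minikube prints:

# names of usable profiles; --light skips the per-cluster status probes, hence the faster timing above
minikube profile list -o json | jq -r '.valid[].Name'
minikube profile list -o json --light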

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service list -o json
functional_test.go:1504: Took "610.881117ms" to run "out/minikube-linux-arm64 -p functional-226068 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/MountCmd/any-port (8.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdany-port2669506208/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764915279843267133" to /tmp/TestFunctionalparallelMountCmdany-port2669506208/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764915279843267133" to /tmp/TestFunctionalparallelMountCmdany-port2669506208/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764915279843267133" to /tmp/TestFunctionalparallelMountCmdany-port2669506208/001/test-1764915279843267133
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.282059ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:14:40.276561    4192 retry.go:31] will retry after 622.460859ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:14 test-1764915279843267133
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh cat /mount-9p/test-1764915279843267133
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-226068 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [fa0ed638-b89c-45ac-ac22-2a71b68c187b] Pending
helpers_test.go:352: "busybox-mount" [fa0ed638-b89c-45ac-ac22-2a71b68c187b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [fa0ed638-b89c-45ac-ac22-2a71b68c187b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [fa0ed638-b89c-45ac-ac22-2a71b68c187b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003582845s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-226068 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdany-port2669506208/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
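The mount flow reduces to a backgrounded mount plus in-guest checks; the retry logged above exists because the 9p mount shows up asynchronously:

# share a host directory into the node over 9p, then verify from inside
minikube mount -p functional-226068 /tmp/share:/mount-9p &
minikube -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-226068 ssh "ls -la /mount-9p"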

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30760
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30760
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
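HTTPS, Format, and URL are three views of the same NodePort endpoint:

# same service, three output shapes
minikube -p functional-226068 service hello-node --url
minikube -p functional-226068 service --namespace=default --https --url hello-node
minikube -p functional-226068 service hello-node --url --format="{{.IP}}"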

TestFunctional/parallel/MountCmd/specific-port (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdspecific-port3447303452/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (594.377842ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:14:49.193717    4192 retry.go:31] will retry after 627.035117ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdspecific-port3447303452/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "sudo umount -f /mount-9p": exit status 1 (393.35374ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-226068 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdspecific-port3447303452/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.35s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T" /mount1: exit status 1 (635.110921ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:14:51.748795    4192 retry.go:31] will retry after 686.652304ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T" /mount1
2025/12/05 06:14:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-226068 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-226068 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1102108759/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.35s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 version -o=json --components: (1.356381765s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-226068 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-226068
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-226068
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-226068 image ls --format short --alsologtostderr:
I1205 06:15:00.659928   41449 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:00.660154   41449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.660186   41449 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:00.660208   41449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.660663   41449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:00.661848   41449 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.662074   41449 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.668008   41449 cli_runner.go:164] Run: docker container inspect functional-226068 --format={{.State.Status}}
I1205 06:15:00.698440   41449 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:00.698510   41449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-226068
I1205 06:15:00.718811   41449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-226068/id_rsa Username:docker}
I1205 06:15:00.829465   41449 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-226068 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ docker.io/kicbase/echo-server               │ functional-226068  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/minikube-local-cache-test │ functional-226068  │ sha256:da1050 │ 990B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-226068 image ls --format table --alsologtostderr:
I1205 06:15:00.968890   41526 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:00.969116   41526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.969149   41526 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:00.969187   41526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.969486   41526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:00.970168   41526 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.970418   41526 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.971056   41526 cli_runner.go:164] Run: docker container inspect functional-226068 --format={{.State.Status}}
I1205 06:15:00.994647   41526 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:00.994702   41526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-226068
I1205 06:15:01.029329   41526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-226068/id_rsa Username:docker}
I1205 06:15:01.143153   41526 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-226068 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserv
er:v1.34.2"],"size":"24559643"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c154
06a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-226068"],"size":"990"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":
["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-226068"],"size":"2173567"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],
"size":"21136588"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-226068 image ls --format json --alsologtostderr:
I1205 06:15:00.951145   41522 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:00.951326   41522 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.951339   41522 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:00.951346   41522 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.952365   41522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:00.953425   41522 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.953795   41522 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.954382   41522 cli_runner.go:164] Run: docker container inspect functional-226068 --format={{.State.Status}}
I1205 06:15:00.982440   41522 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:00.982515   41522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-226068
I1205 06:15:01.003944   41522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-226068/id_rsa Username:docker}
I1205 06:15:01.112270   41522 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
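The JSON format is the script-friendly one; assuming jq, the entries above (id, repoDigests, repoTags, size) filter naturally:

# every tag the runtime knows about
minikube -p functional-226068 image ls --format json | jq -r '.[].repoTags[]'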

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-226068 image ls --format yaml --alsologtostderr:
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-226068
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-226068
size: "990"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-226068 image ls --format yaml --alsologtostderr:
I1205 06:15:00.666307   41450 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:00.666609   41450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.666634   41450 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:00.666653   41450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:00.666993   41450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:00.667761   41450 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.668375   41450 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:00.669067   41450 cli_runner.go:164] Run: docker container inspect functional-226068 --format={{.State.Status}}
I1205 06:15:00.696362   41450 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:00.696424   41450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-226068
I1205 06:15:00.718995   41450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-226068/id_rsa Username:docker}
I1205 06:15:00.829367   41450 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-226068 ssh pgrep buildkitd: exit status 1 (300.62364ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr: (3.398571506s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr:
I1205 06:15:01.496192   41661 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:01.496413   41661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:01.496456   41661 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:01.496481   41661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:01.496867   41661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:01.497950   41661 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:01.501400   41661 config.go:182] Loaded profile config "functional-226068": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1205 06:15:01.502066   41661 cli_runner.go:164] Run: docker container inspect functional-226068 --format={{.State.Status}}
I1205 06:15:01.521641   41661 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:01.521710   41661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-226068
I1205 06:15:01.542184   41661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-226068/id_rsa Username:docker}
I1205 06:15:01.651051   41661 build_images.go:162] Building image from path: /tmp/build.3893102840.tar
I1205 06:15:01.651149   41661 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:15:01.659612   41661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3893102840.tar
I1205 06:15:01.663817   41661 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3893102840.tar: stat -c "%s %y" /var/lib/minikube/build/build.3893102840.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3893102840.tar': No such file or directory
I1205 06:15:01.663847   41661 ssh_runner.go:362] scp /tmp/build.3893102840.tar --> /var/lib/minikube/build/build.3893102840.tar (3072 bytes)
I1205 06:15:01.683844   41661 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3893102840
I1205 06:15:01.693930   41661 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3893102840 -xf /var/lib/minikube/build/build.3893102840.tar
I1205 06:15:01.702285   41661 containerd.go:394] Building image: /var/lib/minikube/build/build.3893102840
I1205 06:15:01.702359   41661 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3893102840 --local dockerfile=/var/lib/minikube/build/build.3893102840 --output type=image,name=localhost/my-image:functional-226068
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:16c9f03d67f91b6df029f36a2d8867207bd4a4e626d4211c1c2dae75178299a2
#8 exporting manifest sha256:16c9f03d67f91b6df029f36a2d8867207bd4a4e626d4211c1c2dae75178299a2 0.0s done
#8 exporting config sha256:b8f518161593e59f9044abd0e7bdd7f2967e7fdcbcc5fbf578bd3560c8f91e61 0.0s done
#8 naming to localhost/my-image:functional-226068 done
#8 DONE 0.2s
I1205 06:15:04.826243   41661 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3893102840 --local dockerfile=/var/lib/minikube/build/build.3893102840 --output type=image,name=localhost/my-image:functional-226068: (3.12385616s)
I1205 06:15:04.826332   41661 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3893102840
I1205 06:15:04.835077   41661 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3893102840.tar
I1205 06:15:04.842444   41661 build_images.go:218] Built localhost/my-image:functional-226068 from /tmp/build.3893102840.tar
I1205 06:15:04.842525   41661 build_images.go:134] succeeded building to: functional-226068
I1205 06:15:04.842537   41661 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
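
The three buildkit stages logged above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) pin down the shape of the build context. Below is a sketch that reproduces the same build outside the test harness; the Dockerfile and content.txt contents are reconstructed from the log, not copied from the actual testdata/build:

  # Reconstructed context (assumption: testdata/build holds just these two files)
  mkdir -p testdata/build
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
  printf 'hello\n' > testdata/build/content.txt
  # Same invocation the test runs:
  out/minikube-linux-arm64 -p functional-226068 image build \
      -t localhost/my-image:functional-226068 testdata/build --alsologtostderr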

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-226068
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr: (1.093719839s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-226068
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image save kicbase/echo-server:functional-226068 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image rm kicbase/echo-server:functional-226068 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)
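
ImageSaveToFile and ImageLoadFromFile together exercise a save/remove/load round-trip through a tarball. A condensed sketch of that flow; the /tmp path stands in for the Jenkins workspace path used above:

  out/minikube-linux-arm64 -p functional-226068 image save \
      kicbase/echo-server:functional-226068 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-226068 image rm kicbase/echo-server:functional-226068
  out/minikube-linux-arm64 -p functional-226068 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-226068 image ls   # the echo-server tag is back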

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-226068
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-226068 image save --daemon kicbase/echo-server:functional-226068 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-226068
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-226068
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-226068
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-226068
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:3.1: (1.101775044s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:3.3: (1.099829549s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1173882131/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache add minikube-local-cache-test:functional-101526
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache delete minikube-local-cache-test:functional-101526
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.416692ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)
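
The cache_reload sequence above checks that cache reload repopulates an image that was deleted from inside the node. The same verification as a standalone sketch, with every command taken from the log:

  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # inspecti now fails ("no such image ... present"), surfaced as ssh exit status 1
  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
  out/minikube-linux-arm64 -p functional-101526 cache reload
  # after the reload the image is present again and inspecti succeeds
  out/minikube-linux-arm64 -p functional-101526 ssh sudo crictl inspecti registry.k8s.io/pause:latest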

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2217001887/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 config get cpus: exit status 14 (69.311902ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 config get cpus: exit status 14 (73.024703ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)
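
The exit codes above spell out the config contract: config get on an unset key fails with exit status 14 and "Error: specified key could not be found in config", while a set/get/unset cycle round-trips cleanly. As a sketch:

  out/minikube-linux-arm64 -p functional-101526 config unset cpus
  out/minikube-linux-arm64 -p functional-101526 config get cpus; echo "exit=$?"   # exit=14
  out/minikube-linux-arm64 -p functional-101526 config set cpus 2
  out/minikube-linux-arm64 -p functional-101526 config get cpus                   # prints 2
  out/minikube-linux-arm64 -p functional-101526 config unset cpus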

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (189.290885ms)

-- stdout --
	* [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1205 06:44:08.446600   71405 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:44:08.446803   71405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.446835   71405 out.go:374] Setting ErrFile to fd 2...
	I1205 06:44:08.446857   71405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.447138   71405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:44:08.447565   71405 out.go:368] Setting JSON to false
	I1205 06:44:08.448432   71405 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5195,"bootTime":1764911853,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:44:08.448539   71405 start.go:143] virtualization:  
	I1205 06:44:08.451826   71405 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:44:08.455565   71405 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:08.455667   71405 notify.go:221] Checking for updates...
	I1205 06:44:08.461439   71405 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:08.464321   71405 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:44:08.467174   71405 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:44:08.469984   71405 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:44:08.472951   71405 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:08.476413   71405 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:08.477047   71405 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:08.510132   71405 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:44:08.510248   71405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.566039   71405 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.55703439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.566143   71405 docker.go:319] overlay module found
	I1205 06:44:08.569230   71405 out.go:179] * Using the docker driver based on existing profile
	I1205 06:44:08.572047   71405 start.go:309] selected driver: docker
	I1205 06:44:08.572072   71405 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.572161   71405 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:08.575850   71405 out.go:203] 
	W1205 06:44:08.578797   71405 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:44:08.581601   71405 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
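
What DryRun establishes: even with --dry-run, minikube still validates the requested resources against the existing profile, so an undersized --memory aborts with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is changed. A minimal sketch of the failing case from the log:

  out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
  echo "exit=$?"   # exit=23, matching the run above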

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (199.868345ms)

-- stdout --
	* [functional-101526] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1205 06:44:08.254484   71356 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:44:08.254734   71356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.254767   71356 out.go:374] Setting ErrFile to fd 2...
	I1205 06:44:08.254788   71356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:44:08.255189   71356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:44:08.255614   71356 out.go:368] Setting JSON to false
	I1205 06:44:08.256505   71356 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5195,"bootTime":1764911853,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 06:44:08.256610   71356 start.go:143] virtualization:  
	I1205 06:44:08.259958   71356 out.go:179] * [functional-101526] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1205 06:44:08.263714   71356 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:44:08.263825   71356 notify.go:221] Checking for updates...
	I1205 06:44:08.269515   71356 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:44:08.272444   71356 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 06:44:08.275344   71356 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 06:44:08.278172   71356 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:44:08.281080   71356 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:44:08.284612   71356 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1205 06:44:08.285228   71356 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:44:08.319098   71356 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:44:08.319204   71356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:44:08.376792   71356 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:44:08.36732453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:44:08.376902   71356 docker.go:319] overlay module found
	I1205 06:44:08.379960   71356 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:44:08.382729   71356 start.go:309] selected driver: docker
	I1205 06:44:08.382745   71356 start.go:927] validating driver "docker" against &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:44:08.382856   71356 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:44:08.386321   71356 out.go:203] 
	W1205 06:44:08.389110   71356 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:44:08.391836   71356 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)
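
The French output above comes from minikube's message localization, which follows the standard locale environment variables; the log does not show which variable the test sets, so the following one-liner is an assumption about the mechanism, not a record of what the test did:

  # Assumed mechanism: force a French locale for a single invocation
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-101526 --dry-run --memory 250MB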

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh -n functional-101526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cp functional-101526:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3011102992/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh -n functional-101526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh -n functional-101526 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4192/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /etc/test/nested/copy/4192/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4192.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /etc/ssl/certs/4192.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4192.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /usr/share/ca-certificates/4192.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /etc/ssl/certs/41922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /usr/share/ca-certificates/41922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.68s)
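
The hashed filenames probed above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follow OpenSSL's subject-hash naming for CA directories: the basename is the certificate's subject hash and the suffix is a collision counter. Given the PEM, the hash can be recomputed like this (path illustrative):

  openssl x509 -noout -subject_hash -in /etc/ssl/certs/4192.pem   # expected: 51391683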

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "sudo systemctl is-active docker": exit status 1 (268.231645ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "sudo systemctl is-active crio": exit status 1 (316.558905ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.59s)
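
Both probes above lean on systemd semantics: systemctl is-active prints the unit state and exits with status 3 for an inactive unit, which minikube ssh propagates as a non-zero exit. One probe as a sketch:

  out/minikube-linux-arm64 -p functional-101526 ssh "sudo systemctl is-active docker" \
      || echo "docker runtime disabled, as expected on a containerd node"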

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-101526 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "318.740575ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.425274ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "342.870663ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.795678ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.718647ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:44:02.043920    4192 retry.go:31] will retry after 609.083764ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh "sudo umount -f /mount-9p": exit status 1 (266.429159ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-101526 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo785096176/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)
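The retry.go:31 line above shows the harness polling until the 9p mount becomes visible in the guest. A minimal sketch of that wait loop, assuming a fixed poll interval where the harness actually uses a randomized backoff (the helper name and timings are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T <mountpoint>` inside the guest until
// a 9p filesystem shows up there, or the deadline passes.
func waitForMount(profile, mountpoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "findmnt -T "+mountpoint+" | grep 9p")
		if err := cmd.Run(); err == nil {
			return nil // mount is visible in the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready after %v", mountpoint, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForMount("functional-101526", "/mount-9p", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}

Polling is needed because the mount daemon is started asynchronously; the first findmnt in this run failed and succeeded only on the retry roughly 600ms later.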
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-101526 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101526 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo13114672/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101526 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-101526
docker.io/kicbase/echo-server:functional-101526
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101526 image ls --format short --alsologtostderr:
I1205 06:44:20.974179   73579 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:20.974541   73579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:20.974578   73579 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:20.974608   73579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:20.974911   73579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:20.975635   73579 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:20.975816   73579 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:20.976354   73579 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:20.993481   73579 ssh_runner.go:195] Run: systemctl --version
I1205 06:44:20.993542   73579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:21.016018   73579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:44:21.119724   73579 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101526 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-101526 │ sha256:45cf4c │ 831kB  │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1               │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ sha256:d7b100 │ 265kB  │
│ registry.k8s.io/pause                       │ 3.3               │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest            │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-101526 │ sha256:da1050 │ 990B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ sha256:667491 │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ sha256:404c2e │ 22.4MB │
│ docker.io/kicbase/echo-server               │ functional-101526 │ sha256:ce2d2c │ 2.17MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101526 image ls --format table --alsologtostderr:
I1205 06:44:25.301409   73975 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:25.301620   73975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:25.301647   73975 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:25.301667   73975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:25.301963   73975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:25.302595   73975 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:25.302765   73975 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:25.303311   73975 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:25.320984   73975 ssh_runner.go:195] Run: systemctl --version
I1205 06:44:25.321037   73975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:25.339644   73975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:44:25.443616   73975 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101526 image ls --format json --alsologtostderr:
[{"id":"sha256:45cf4ce823e1a01fadb98ac71f8d70653ba75c4b5dfbb62d6d91fbec65562b30","repoDigests":[],"repoTags":["localhost/my-image:functional-101526"],"size":"830617"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21166088"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20658969"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15389290"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id"
:"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-101526"],"size":"2173567"},{"id":"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8032639"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21134420"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24676285"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22428165"},{"id":"sha2
56:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"265458"},{"id":"sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-101526"],"size":"990"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101526 image ls --format json --alsologtostderr:
I1205 06:44:25.077531   73939 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:25.077676   73939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:25.077701   73939 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:25.077730   73939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:25.078005   73939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:25.078652   73939 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:25.078817   73939 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:25.079351   73939 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:25.097609   73939 ssh_runner.go:195] Run: systemctl --version
I1205 06:44:25.097666   73939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:25.115117   73939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:44:25.219752   73939 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)
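The JSON above is a plain array of image records, so consumers can decode it with a small three-field struct. The field names come straight from the output; the struct and the program around it are illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors one entry of `minikube image ls --format json`.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-101526", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

Note that size is a string, not a number, so arithmetic on it requires an explicit strconv parse.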
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101526 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-101526
size: "2173567"
- id: sha256:da10500e63c801b54da78f8674131cdf4c08048aa0546512b5c303fbd1d46fc4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-101526
size: "990"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24676285"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8032639"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21166088"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21134420"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20658969"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22428165"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15389290"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "265458"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101526 image ls --format yaml --alsologtostderr:
I1205 06:44:21.198112   73618 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:21.198382   73618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:21.198399   73618 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:21.198406   73618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:21.198953   73618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:21.199632   73618 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:21.199803   73618 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:21.200379   73618 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:21.218275   73618 ssh_runner.go:195] Run: systemctl --version
I1205 06:44:21.218337   73618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:21.234790   73618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:44:21.335693   73618 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101526 ssh pgrep buildkitd: exit status 1 (262.410654ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image build -t localhost/my-image:functional-101526 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-101526 image build -t localhost/my-image:functional-101526 testdata/build --alsologtostderr: (3.175105164s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101526 image build -t localhost/my-image:functional-101526 testdata/build --alsologtostderr:
I1205 06:44:21.686158   73724 out.go:360] Setting OutFile to fd 1 ...
I1205 06:44:21.686260   73724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:21.686265   73724 out.go:374] Setting ErrFile to fd 2...
I1205 06:44:21.686271   73724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:44:21.686565   73724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:44:21.687200   73724 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:21.687854   73724 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:44:21.688385   73724 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:44:21.705106   73724 ssh_runner.go:195] Run: systemctl --version
I1205 06:44:21.705202   73724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:44:21.721429   73724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:44:21.823422   73724 build_images.go:162] Building image from path: /tmp/build.1763020260.tar
I1205 06:44:21.823488   73724 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:44:21.830889   73724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1763020260.tar
I1205 06:44:21.834295   73724 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1763020260.tar: stat -c "%s %y" /var/lib/minikube/build/build.1763020260.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1763020260.tar': No such file or directory
I1205 06:44:21.834331   73724 ssh_runner.go:362] scp /tmp/build.1763020260.tar --> /var/lib/minikube/build/build.1763020260.tar (3072 bytes)
I1205 06:44:21.850874   73724 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1763020260
I1205 06:44:21.858182   73724 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1763020260 -xf /var/lib/minikube/build/build.1763020260.tar
I1205 06:44:21.865896   73724 containerd.go:394] Building image: /var/lib/minikube/build/build.1763020260
I1205 06:44:21.865984   73724 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1763020260 --local dockerfile=/var/lib/minikube/build/build.1763020260 --output type=image,name=localhost/my-image:functional-101526
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:14578a37497066f2c59acddefa830b77f8599b42578b7fe689c541a8c187c120
#8 exporting manifest sha256:14578a37497066f2c59acddefa830b77f8599b42578b7fe689c541a8c187c120 0.0s done
#8 exporting config sha256:45cf4ce823e1a01fadb98ac71f8d70653ba75c4b5dfbb62d6d91fbec65562b30 0.0s done
#8 naming to localhost/my-image:functional-101526 done
#8 DONE 0.2s
I1205 06:44:24.783465   73724 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1763020260 --local dockerfile=/var/lib/minikube/build/build.1763020260 --output type=image,name=localhost/my-image:functional-101526: (2.917449914s)
I1205 06:44:24.783529   73724 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1763020260
I1205 06:44:24.791291   73724 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1763020260.tar
I1205 06:44:24.798954   73724 build_images.go:218] Built localhost/my-image:functional-101526 from /tmp/build.1763020260.tar
I1205 06:44:24.798989   73724 build_images.go:134] succeeded building to: functional-101526
I1205 06:44:24.798995   73724 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)
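The build flow recorded above is: tar the context on the host, scp it to /var/lib/minikube/build, untar it, then drive BuildKit directly with buildctl. A sketch of that final step using the exact flags from the ssh_runner line in this log (the context path and tag are copied from this run; the wrapper program itself is illustrative and would normally run inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unpacked build context, as created from the uploaded tar.
	dir := "/var/lib/minikube/build/build.1763020260"
	tag := "localhost/my-image:functional-101526"

	// dockerfile.v0 is BuildKit's Dockerfile frontend; context and
	// dockerfile point at the same local directory here.
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name="+tag)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}

Because buildctl talks to buildkitd rather than to a Docker daemon, this is how image builds work on the containerd runtime, where `docker build` is unavailable.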
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image load --daemon kicbase/echo-server:functional-101526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
E1205 06:44:14.019453    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image load --daemon kicbase/echo-server:functional-101526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-101526
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image load --daemon kicbase/echo-server:functional-101526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image save kicbase/echo-server:functional-101526 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image rm kicbase/echo-server:functional-101526 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-101526
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 image save --daemon kicbase/echo-server:functional-101526 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101526 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-101526
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (161.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1205 06:47:16.968402    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:16.974687    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:16.985998    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:17.007303    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:17.048636    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:17.129981    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:17.291413    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:17.613013    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:18.254354    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:19.535661    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:22.097886    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:27.219216    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:37.460809    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:47:57.942318    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:48:01.797764    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:48:38.904820    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m41.009672991s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (161.94s)

TestMultiControlPlane/serial/DeployApp (7.5s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 kubectl -- rollout status deployment/busybox: (4.652444533s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-krvqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-lcgcq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-w6mbb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-krvqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-lcgcq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-w6mbb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-krvqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-lcgcq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-w6mbb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.50s)
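The DNS fan-out in this test is mechanical: enumerate the busybox pods, then resolve each target from inside every pod. A compact sketch of the same loop (command shapes taken from the ha_test lines above; error handling is deliberately thin and the program is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List pod names the same way the test does, via jsonpath.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "ha-566224",
		"kubectl", "--", "get", "pods", "-o",
		"jsonpath={.items[*].metadata.name}").Output()

	targets := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range targets {
			err := exec.Command("out/minikube-linux-arm64", "-p", "ha-566224",
				"kubectl", "--", "exec", pod, "--", "nslookup", host).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, host, err)
		}
	}
}

Checking all three name forms exercises both external resolution and the cluster-internal search-domain handling from every replica.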
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-krvqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-krvqv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-lcgcq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-lcgcq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-w6mbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 kubectl -- exec busybox-7b57f96db7-w6mbb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
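
The host-reachability check parses the host.minikube.internal address out of nslookup output and pings it once from inside a pod. The same probe, shown for a single pod (the awk/cut pipeline is copied verbatim from the test; $POD as in the previous sketch):

    HOST_IP=$(minikube -p ha-566224 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # Resolves to 192.168.49.1 on the docker driver, per the log above
    minikube -p ha-566224 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"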

TestMultiControlPlane/serial/AddWorkerNode (58.73s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node add --alsologtostderr -v 5
E1205 06:49:14.019526    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:50:00.827691    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 node add --alsologtostderr -v 5: (57.625703079s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5: (1.104713728s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.73s)
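
Joining the worker that this step verifies is a single subcommand; most of the ~58 s is node provisioning and the kubeadm join. Sketch:

    minikube -p ha-566224 node add    # adds a worker (ha-566224-m04 in this run)
    minikube -p ha-566224 status      # worker rows show host/kubelet but no apiserver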

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-566224 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.039955486s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (20.32s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 status --output json --alsologtostderr -v 5: (1.047055059s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp testdata/cp-test.txt ha-566224:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2452107261/001/cp-test_ha-566224.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224:/home/docker/cp-test.txt ha-566224-m02:/home/docker/cp-test_ha-566224_ha-566224-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test_ha-566224_ha-566224-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224:/home/docker/cp-test.txt ha-566224-m03:/home/docker/cp-test_ha-566224_ha-566224-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test_ha-566224_ha-566224-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224:/home/docker/cp-test.txt ha-566224-m04:/home/docker/cp-test_ha-566224_ha-566224-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test_ha-566224_ha-566224-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp testdata/cp-test.txt ha-566224-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2452107261/001/cp-test_ha-566224-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m02:/home/docker/cp-test.txt ha-566224:/home/docker/cp-test_ha-566224-m02_ha-566224.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test_ha-566224-m02_ha-566224.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m02:/home/docker/cp-test.txt ha-566224-m03:/home/docker/cp-test_ha-566224-m02_ha-566224-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test_ha-566224-m02_ha-566224-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m02:/home/docker/cp-test.txt ha-566224-m04:/home/docker/cp-test_ha-566224-m02_ha-566224-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test_ha-566224-m02_ha-566224-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp testdata/cp-test.txt ha-566224-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2452107261/001/cp-test_ha-566224-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m03:/home/docker/cp-test.txt ha-566224:/home/docker/cp-test_ha-566224-m03_ha-566224.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test_ha-566224-m03_ha-566224.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m03:/home/docker/cp-test.txt ha-566224-m02:/home/docker/cp-test_ha-566224-m03_ha-566224-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test_ha-566224-m03_ha-566224-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m03:/home/docker/cp-test.txt ha-566224-m04:/home/docker/cp-test_ha-566224-m03_ha-566224-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test_ha-566224-m03_ha-566224-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp testdata/cp-test.txt ha-566224-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2452107261/001/cp-test_ha-566224-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m04:/home/docker/cp-test.txt ha-566224:/home/docker/cp-test_ha-566224-m04_ha-566224.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224 "sudo cat /home/docker/cp-test_ha-566224-m04_ha-566224.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m04:/home/docker/cp-test.txt ha-566224-m02:/home/docker/cp-test_ha-566224-m04_ha-566224-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m02 "sudo cat /home/docker/cp-test_ha-566224-m04_ha-566224-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 cp ha-566224-m04:/home/docker/cp-test.txt ha-566224-m03:/home/docker/cp-test_ha-566224-m04_ha-566224-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 ssh -n ha-566224-m03 "sudo cat /home/docker/cp-test_ha-566224-m04_ha-566224-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.32s)
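
The CopyFile block above is the full matrix of minikube cp and minikube ssh across all four nodes; every pairing follows the same three-command pattern, shown once here (a sketch; -n picks the node an ssh command runs on):

    minikube -p ha-566224 cp testdata/cp-test.txt ha-566224:/home/docker/cp-test.txt
    minikube -p ha-566224 cp ha-566224:/home/docker/cp-test.txt \
      ha-566224-m02:/home/docker/cp-test_ha-566224_ha-566224-m02.txt
    minikube -p ha-566224 ssh -n ha-566224-m02 \
      "sudo cat /home/docker/cp-test_ha-566224_ha-566224-m02.txt"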

TestMultiControlPlane/serial/StopSecondaryNode (13.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 node stop m02 --alsologtostderr -v 5: (12.136166888s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5: exit status 7 (878.244029ms)
-- stdout --
	ha-566224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-566224-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-566224-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-566224-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1205 06:50:44.567835   91448 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:50:44.567959   91448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:50:44.567972   91448 out.go:374] Setting ErrFile to fd 2...
	I1205 06:50:44.567977   91448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:50:44.568220   91448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:50:44.568410   91448 out.go:368] Setting JSON to false
	I1205 06:50:44.568450   91448 mustload.go:66] Loading cluster: ha-566224
	I1205 06:50:44.568515   91448 notify.go:221] Checking for updates...
	I1205 06:50:44.569979   91448 config.go:182] Loaded profile config "ha-566224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 06:50:44.570007   91448 status.go:174] checking status of ha-566224 ...
	I1205 06:50:44.570735   91448 cli_runner.go:164] Run: docker container inspect ha-566224 --format={{.State.Status}}
	I1205 06:50:44.598734   91448 status.go:371] ha-566224 host status = "Running" (err=<nil>)
	I1205 06:50:44.598759   91448 host.go:66] Checking if "ha-566224" exists ...
	I1205 06:50:44.599069   91448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-566224
	I1205 06:50:44.631391   91448 host.go:66] Checking if "ha-566224" exists ...
	I1205 06:50:44.631709   91448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:50:44.631762   91448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-566224
	I1205 06:50:44.655444   91448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/ha-566224/id_rsa Username:docker}
	I1205 06:50:44.758531   91448 ssh_runner.go:195] Run: systemctl --version
	I1205 06:50:44.765037   91448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:44.778174   91448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:50:44.847834   91448 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-05 06:50:44.837719596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:50:44.848367   91448 kubeconfig.go:125] found "ha-566224" server: "https://192.168.49.254:8443"
	I1205 06:50:44.848408   91448 api_server.go:166] Checking apiserver status ...
	I1205 06:50:44.848461   91448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:44.861779   91448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	I1205 06:50:44.870245   91448 api_server.go:182] apiserver freezer: "12:freezer:/docker/03f5ad1cf02ce1b72922c2bba5f55ab49408079d68e346005acacb50e18505dc/kubepods/burstable/podc561cbe6603c20444e8b0d0bcbda873a/0ea879ef79d857bc304125ffd6593acb8c392679a9b679319d7b1b5eb6b9d8c1"
	I1205 06:50:44.870315   91448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/03f5ad1cf02ce1b72922c2bba5f55ab49408079d68e346005acacb50e18505dc/kubepods/burstable/podc561cbe6603c20444e8b0d0bcbda873a/0ea879ef79d857bc304125ffd6593acb8c392679a9b679319d7b1b5eb6b9d8c1/freezer.state
	I1205 06:50:44.880497   91448 api_server.go:204] freezer state: "THAWED"
	I1205 06:50:44.880525   91448 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 06:50:44.891698   91448 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 06:50:44.891728   91448 status.go:463] ha-566224 apiserver status = Running (err=<nil>)
	I1205 06:50:44.891740   91448 status.go:176] ha-566224 status: &{Name:ha-566224 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:50:44.891764   91448 status.go:174] checking status of ha-566224-m02 ...
	I1205 06:50:44.892094   91448 cli_runner.go:164] Run: docker container inspect ha-566224-m02 --format={{.State.Status}}
	I1205 06:50:44.909738   91448 status.go:371] ha-566224-m02 host status = "Stopped" (err=<nil>)
	I1205 06:50:44.909769   91448 status.go:384] host is not running, skipping remaining checks
	I1205 06:50:44.909775   91448 status.go:176] ha-566224-m02 status: &{Name:ha-566224-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:50:44.909796   91448 status.go:174] checking status of ha-566224-m03 ...
	I1205 06:50:44.910106   91448 cli_runner.go:164] Run: docker container inspect ha-566224-m03 --format={{.State.Status}}
	I1205 06:50:44.926520   91448 status.go:371] ha-566224-m03 host status = "Running" (err=<nil>)
	I1205 06:50:44.926545   91448 host.go:66] Checking if "ha-566224-m03" exists ...
	I1205 06:50:44.926845   91448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-566224-m03
	I1205 06:50:44.944066   91448 host.go:66] Checking if "ha-566224-m03" exists ...
	I1205 06:50:44.944409   91448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:50:44.944456   91448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-566224-m03
	I1205 06:50:44.963209   91448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/ha-566224-m03/id_rsa Username:docker}
	I1205 06:50:45.081907   91448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:45.101686   91448 kubeconfig.go:125] found "ha-566224" server: "https://192.168.49.254:8443"
	I1205 06:50:45.101800   91448 api_server.go:166] Checking apiserver status ...
	I1205 06:50:45.101889   91448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:45.122583   91448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	I1205 06:50:45.139196   91448 api_server.go:182] apiserver freezer: "12:freezer:/docker/3e833f02af57ee605fcab91bb58e9dddf4774f5680731099d17ed0fa08a91e41/kubepods/burstable/pod395b012790bcb413429f4c76f0a190d4/edb3326bd313327062c0316516f533ff1d53ed6cb640fc20cacdace126d80a69"
	I1205 06:50:45.139367   91448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e833f02af57ee605fcab91bb58e9dddf4774f5680731099d17ed0fa08a91e41/kubepods/burstable/pod395b012790bcb413429f4c76f0a190d4/edb3326bd313327062c0316516f533ff1d53ed6cb640fc20cacdace126d80a69/freezer.state
	I1205 06:50:45.150388   91448 api_server.go:204] freezer state: "THAWED"
	I1205 06:50:45.150429   91448 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 06:50:45.160854   91448 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 06:50:45.160890   91448 status.go:463] ha-566224-m03 apiserver status = Running (err=<nil>)
	I1205 06:50:45.160902   91448 status.go:176] ha-566224-m03 status: &{Name:ha-566224-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:50:45.160925   91448 status.go:174] checking status of ha-566224-m04 ...
	I1205 06:50:45.161352   91448 cli_runner.go:164] Run: docker container inspect ha-566224-m04 --format={{.State.Status}}
	I1205 06:50:45.184944   91448 status.go:371] ha-566224-m04 host status = "Running" (err=<nil>)
	I1205 06:50:45.184974   91448 host.go:66] Checking if "ha-566224-m04" exists ...
	I1205 06:50:45.185382   91448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-566224-m04
	I1205 06:50:45.209041   91448 host.go:66] Checking if "ha-566224-m04" exists ...
	I1205 06:50:45.209454   91448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:50:45.209526   91448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-566224-m04
	I1205 06:50:45.245791   91448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/ha-566224-m04/id_rsa Username:docker}
	I1205 06:50:45.374888   91448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:45.388526   91448 status.go:176] ha-566224-m04 status: &{Name:ha-566224-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.02s)
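
Note that status intentionally exits non-zero (exit status 7 above) once any node is stopped; the test treats that as the expected outcome, not a failure. Manually:

    minikube -p ha-566224 node stop m02
    minikube -p ha-566224 status; echo "exit=$?"   # expect exit=7 while m02 is down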

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.24s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 node start m02 --alsologtostderr -v 5: (11.830206706s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5: (1.266231137s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.24s)
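
Bringing the stopped control-plane node back and confirming it rejoins (a sketch; kubectl uses the context minikube wrote to the kubeconfig):

    minikube -p ha-566224 node start m02
    minikube -p ha-566224 status
    kubectl get nodes    # all four nodes should return to Ready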

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.135193914s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.14s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.54s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 stop --alsologtostderr -v 5: (37.924837181s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 start --wait true --alsologtostderr -v 5
E1205 06:52:16.967869    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:52:17.088223    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 start --wait true --alsologtostderr -v 5: (1m4.413808998s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.54s)
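
The invariant checked here is that a full stop/start cycle preserves the node list. The same check by hand (a sketch; the /tmp file paths are illustrative, and diff prints nothing when the lists match):

    minikube -p ha-566224 node list > /tmp/nodes-before.txt
    minikube -p ha-566224 stop
    minikube -p ha-566224 start --wait true
    minikube -p ha-566224 node list > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt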

TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node delete m03 --alsologtostderr -v 5
E1205 06:52:44.669413    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 node delete m03 --alsologtostderr -v 5: (10.327405996s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)
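
Removing a control-plane member and verifying the survivors stay Ready (a sketch; the go-template is the one the test runs):

    minikube -p ha-566224 node delete m03
    kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'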

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 stop --alsologtostderr -v 5
E1205 06:53:01.797668    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 stop --alsologtostderr -v 5: (36.242223479s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5: exit status 7 (108.762967ms)
-- stdout --
	ha-566224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-566224-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-566224-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1205 06:53:32.601894  106324 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:53:32.602100  106324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:53:32.602134  106324 out.go:374] Setting ErrFile to fd 2...
	I1205 06:53:32.602157  106324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:53:32.602560  106324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 06:53:32.602852  106324 out.go:368] Setting JSON to false
	I1205 06:53:32.602910  106324 mustload.go:66] Loading cluster: ha-566224
	I1205 06:53:32.603734  106324 notify.go:221] Checking for updates...
	I1205 06:53:32.603980  106324 config.go:182] Loaded profile config "ha-566224": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 06:53:32.604038  106324 status.go:174] checking status of ha-566224 ...
	I1205 06:53:32.604654  106324 cli_runner.go:164] Run: docker container inspect ha-566224 --format={{.State.Status}}
	I1205 06:53:32.621924  106324 status.go:371] ha-566224 host status = "Stopped" (err=<nil>)
	I1205 06:53:32.621943  106324 status.go:384] host is not running, skipping remaining checks
	I1205 06:53:32.621961  106324 status.go:176] ha-566224 status: &{Name:ha-566224 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:53:32.621988  106324 status.go:174] checking status of ha-566224-m02 ...
	I1205 06:53:32.622283  106324 cli_runner.go:164] Run: docker container inspect ha-566224-m02 --format={{.State.Status}}
	I1205 06:53:32.645109  106324 status.go:371] ha-566224-m02 host status = "Stopped" (err=<nil>)
	I1205 06:53:32.645126  106324 status.go:384] host is not running, skipping remaining checks
	I1205 06:53:32.645133  106324 status.go:176] ha-566224-m02 status: &{Name:ha-566224-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:53:32.645151  106324 status.go:174] checking status of ha-566224-m04 ...
	I1205 06:53:32.645471  106324 cli_runner.go:164] Run: docker container inspect ha-566224-m04 --format={{.State.Status}}
	I1205 06:53:32.666467  106324 status.go:371] ha-566224-m04 host status = "Stopped" (err=<nil>)
	I1205 06:53:32.666488  106324 status.go:384] host is not running, skipping remaining checks
	I1205 06:53:32.666495  106324 status.go:176] ha-566224-m04 status: &{Name:ha-566224-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.35s)

TestMultiControlPlane/serial/RestartCluster (60.78s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1205 06:54:14.019723    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.811488038s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.78s)
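
Restarting a fully stopped HA cluster reuses the original start flags; minikube rehydrates every node recorded in the profile (sketch):

    minikube -p ha-566224 stop
    minikube -p ha-566224 start --wait true --driver=docker --container-runtime=containerd
    minikube -p ha-566224 status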

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

TestMultiControlPlane/serial/AddSecondaryNode (74.29s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 node add --control-plane --alsologtostderr -v 5: (1m13.188895445s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-566224 status --alsologtostderr -v 5: (1.101748829s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.29s)
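
Growing the control plane again is node add with --control-plane; the longer runtime (~73 s) covers the etcd member join plus apiserver bring-up on the new node. Sketch:

    minikube -p ha-566224 node add --control-plane
    minikube -p ha-566224 status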

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.054769597s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (81.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-065478 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1205 06:57:16.967921    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-065478 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.603285393s)
--- PASS: TestJSONOutput/start/Command (81.61s)
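
With --output=json every line minikube prints is a CloudEvents envelope (specversion/type/data, visible verbatim in the TestErrorJSONOutput capture further down), so progress is machine-consumable. A hedged sketch using jq, which the test itself does not use:

    minikube start -p json-output-065478 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
             | .data | "\(.currentstep)/\(.totalsteps) \(.message)"'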

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-065478 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-065478 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-065478 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-065478 --output=json --user=testUser: (6.082703868s)
--- PASS: TestJSONOutput/stop/Command (6.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-295935 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-295935 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.333633ms)
-- stdout --
	{"specversion":"1.0","id":"18d46b1c-b6bf-4252-ac49-19fa0ae94152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-295935] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5c8a08d-a14b-43bd-b4a3-33c6e03a0831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"3771eda0-d0f1-42c7-bcd9-ee70b6268a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bda422bf-0c89-455c-9517-52d845ad08ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig"}}
	{"specversion":"1.0","id":"5df3fcbc-4626-4053-a9a9-f16306a3b547","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube"}}
	{"specversion":"1.0","id":"7d814690-f46f-4733-b867-bdafea5b4a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d6b74436-825e-4213-ad87-c51c407188c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f6e91e3-f126-4338-b1e9-c8c0c5e8362d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-295935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-295935
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (40.96s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-259592 --network=
E1205 06:58:01.797286    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-259592 --network=: (38.69844927s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-259592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-259592
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-259592: (2.242702966s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.96s)
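
An empty --network= value makes minikube create and label a dedicated docker network for the profile, while --network=bridge (next test) attaches to the default bridge instead. Sketch of the first variant (the expectation in the comment mirrors what the test's network listing checks):

    minikube start -p docker-network-259592 --network=
    docker network ls --format '{{.Name}}'    # the newly created network should be listed
    minikube delete -p docker-network-259592  # deleting the profile removes it again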

TestKicCustomNetwork/use_default_bridge_network (35.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-826210 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-826210 --network=bridge: (33.254852401s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-826210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-826210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-826210: (2.061481058s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.34s)

TestKicExistingNetwork (36s)

=== RUN   TestKicExistingNetwork
I1205 06:58:48.891999    4192 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 06:58:48.908030    4192 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 06:58:48.908105    4192 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 06:58:48.908123    4192 cli_runner.go:164] Run: docker network inspect existing-network
W1205 06:58:48.923724    4192 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 06:58:48.923752    4192 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1205 06:58:48.923771    4192 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1205 06:58:48.923875    4192 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 06:58:48.940922    4192 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a985d692552 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:7f:02:3d:70:0e} reservation:<nil>}
I1205 06:58:48.941500    4192 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001608410}
I1205 06:58:48.941540    4192 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 06:58:48.941599    4192 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 06:58:49.004755    4192 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-781050 --network=existing-network
E1205 06:59:14.019741    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-781050 --network=existing-network: (33.714932521s)
helpers_test.go:175: Cleaning up "existing-network-781050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-781050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-781050: (2.130187163s)
I1205 06:59:24.870127    4192 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.00s)
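
Here the network exists before minikube starts, so minikube attaches to it instead of creating one. A pared-down version of what the test's setup did (a sketch; the subnet comes from minikube's free-subnet probe in the log, and the minikube-internal labels are omitted):

    docker network create --driver=bridge --subnet=192.168.58.0/24 \
      --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-781050 --network=existing-network
    minikube delete -p existing-network-781050   # networks minikube did not create are left alone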

TestKicCustomSubnet (35.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-031876 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-031876 --subnet=192.168.60.0/24: (32.835925145s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-031876 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-031876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-031876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-031876: (2.378089354s)
--- PASS: TestKicCustomSubnet (35.24s)

                                                
                                    
TestKicStaticIP (36.87s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-925385 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-925385 --static-ip=192.168.200.200: (34.471041769s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-925385 ip
helpers_test.go:175: Cleaning up "static-ip-925385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-925385
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-925385: (2.214507461s)
--- PASS: TestKicStaticIP (36.87s)
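
The static-IP variant pins the node address instead of the subnet; a minimal sketch (profile name illustrative):

    minikube start -p static-ip-demo --static-ip=192.168.200.200
    minikube -p static-ip-demo ip    # expected to print 192.168.200.200
    minikube delete -p static-ip-demo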

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (69.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-458627 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-458627 --driver=docker  --container-runtime=containerd: (31.154132097s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-461416 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-461416 --driver=docker  --container-runtime=containerd: (32.469793595s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-458627
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-461416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-461416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-461416
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-461416: (2.1711328s)
helpers_test.go:175: Cleaning up "first-458627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-458627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-458627: (2.066657301s)
--- PASS: TestMinikubeProfile (69.37s)
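
The profile assertions above lean on `profile list --output json`; a hedged one-liner for pulling profile names out of that output, assuming jq is available and the usual valid/invalid grouping in the JSON:

    minikube profile list --output json | jq -r '.valid[].Name'    # enumerate valid profile names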

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-064758 --memory=3072 --mount-string /tmp/TestMountStartserial3918791304/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-064758 --memory=3072 --mount-string /tmp/TestMountStartserial3918791304/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.336562463s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-064758 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
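
Both mount-start profiles boil down to a host directory exported into the guest at start time; a sketch mirroring the flags in the log, with an illustrative host path and profile name:

    mkdir -p /tmp/mnt-demo
    minikube start -p mount-demo --memory=3072 --no-kubernetes --driver=docker --container-runtime=containerd \
      --mount-string /tmp/mnt-demo:/minikube-host --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
    minikube -p mount-demo ssh -- ls /minikube-host    # host directory visible from inside the node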

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-066751 --memory=3072 --mount-string /tmp/TestMountStartserial3918791304/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-066751 --memory=3072 --mount-string /tmp/TestMountStartserial3918791304/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.428096465s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.43s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-066751 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-064758 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-064758 --alsologtostderr -v=5: (1.728564382s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-066751 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-066751
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-066751: (1.289442646s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.45s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-066751
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-066751: (6.452156718s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-066751 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-941176 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1205 07:02:16.971524    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:02:44.872225    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:01.797722    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:40.032632    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-941176 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.429534225s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.97s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-941176 -- rollout status deployment/busybox: (4.835267256s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-2dqtq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-6vd6r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-2dqtq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-6vd6r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-2dqtq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-6vd6r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.70s)
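
The DNS assertions first have to discover the generated pod names; the JSONPath queries in the log flatten fields across all items and are reusable as-is against any namespace:

    kubectl get pods -o jsonpath='{.items[*].status.podIP}'      # space-separated pod IPs
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'     # space-separated pod names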

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-2dqtq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-2dqtq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-6vd6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-941176 -- exec busybox-7b57f96db7-6vd6r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
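
The ping step needs the host's IP as seen from inside the pod; the pipeline in the log derives it by parsing nslookup output (the answer record lands on line 5, and the IP is the third space-separated field):

    # <pod> is a placeholder for one of the busybox pod names above
    kubectl exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <pod> -- sh -c "ping -c 1 <host-ip>"    # then ping the extracted address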

                                                
                                    
TestMultiNode/serial/AddNode (27.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-941176 -v=5 --alsologtostderr
E1205 07:04:14.019824    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-941176 -v=5 --alsologtostderr: (26.585096802s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-941176 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp testdata/cp-test.txt multinode-941176:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile58608743/001/cp-test_multinode-941176.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176:/home/docker/cp-test.txt multinode-941176-m02:/home/docker/cp-test_multinode-941176_multinode-941176-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test_multinode-941176_multinode-941176-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176:/home/docker/cp-test.txt multinode-941176-m03:/home/docker/cp-test_multinode-941176_multinode-941176-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test_multinode-941176_multinode-941176-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp testdata/cp-test.txt multinode-941176-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile58608743/001/cp-test_multinode-941176-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m02:/home/docker/cp-test.txt multinode-941176:/home/docker/cp-test_multinode-941176-m02_multinode-941176.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test_multinode-941176-m02_multinode-941176.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m02:/home/docker/cp-test.txt multinode-941176-m03:/home/docker/cp-test_multinode-941176-m02_multinode-941176-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test_multinode-941176-m02_multinode-941176-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp testdata/cp-test.txt multinode-941176-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile58608743/001/cp-test_multinode-941176-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m03:/home/docker/cp-test.txt multinode-941176:/home/docker/cp-test_multinode-941176-m03_multinode-941176.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176 "sudo cat /home/docker/cp-test_multinode-941176-m03_multinode-941176.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 cp multinode-941176-m03:/home/docker/cp-test.txt multinode-941176-m02:/home/docker/cp-test_multinode-941176-m03_multinode-941176-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test_multinode-941176-m03_multinode-941176-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.76s)
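
Every copy above is a `minikube cp` with node-qualified paths on either side; a condensed sketch of one round trip (destination file names are illustrative):

    minikube -p multinode-941176 cp testdata/cp-test.txt multinode-941176:/home/docker/cp-test.txt      # host -> node
    minikube -p multinode-941176 cp multinode-941176:/home/docker/cp-test.txt /tmp/cp-test-copy.txt     # node -> host
    minikube -p multinode-941176 cp multinode-941176:/home/docker/cp-test.txt multinode-941176-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p multinode-941176 ssh -n multinode-941176-m02 "sudo cat /home/docker/cp-test.txt"        # verify on the target node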

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-941176 node stop m03: (1.316690573s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-941176 status: exit status 7 (553.344465ms)

                                                
                                                
-- stdout --
	multinode-941176
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-941176-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-941176-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr: exit status 7 (533.552273ms)

                                                
                                                
-- stdout --
	multinode-941176
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-941176-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-941176-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:04:53.104745  159551 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:04:53.104964  159551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:04:53.104996  159551 out.go:374] Setting ErrFile to fd 2...
	I1205 07:04:53.105027  159551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:04:53.105369  159551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:04:53.105607  159551 out.go:368] Setting JSON to false
	I1205 07:04:53.105670  159551 mustload.go:66] Loading cluster: multinode-941176
	I1205 07:04:53.105766  159551 notify.go:221] Checking for updates...
	I1205 07:04:53.106213  159551 config.go:182] Loaded profile config "multinode-941176": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:04:53.106252  159551 status.go:174] checking status of multinode-941176 ...
	I1205 07:04:53.106843  159551 cli_runner.go:164] Run: docker container inspect multinode-941176 --format={{.State.Status}}
	I1205 07:04:53.127641  159551 status.go:371] multinode-941176 host status = "Running" (err=<nil>)
	I1205 07:04:53.127662  159551 host.go:66] Checking if "multinode-941176" exists ...
	I1205 07:04:53.127965  159551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-941176
	I1205 07:04:53.161225  159551 host.go:66] Checking if "multinode-941176" exists ...
	I1205 07:04:53.161532  159551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:04:53.161577  159551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-941176
	I1205 07:04:53.180315  159551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/multinode-941176/id_rsa Username:docker}
	I1205 07:04:53.283559  159551 ssh_runner.go:195] Run: systemctl --version
	I1205 07:04:53.290359  159551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:04:53.303519  159551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:04:53.362632  159551 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:04:53.352141616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:04:53.363198  159551 kubeconfig.go:125] found "multinode-941176" server: "https://192.168.67.2:8443"
	I1205 07:04:53.363232  159551 api_server.go:166] Checking apiserver status ...
	I1205 07:04:53.363277  159551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:04:53.375584  159551 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I1205 07:04:53.383831  159551 api_server.go:182] apiserver freezer: "12:freezer:/docker/c63abd4a8d17db589c2a1eeca3a29a8e8565cc92055763494466ab7f51b3a215/kubepods/burstable/pod1d6441fadff5f1376d5cfe4a2607059d/3f70bec213067a093c86beb09bd4dedb486af5f08d01eb5b268bb22d7e981bd8"
	I1205 07:04:53.383907  159551 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c63abd4a8d17db589c2a1eeca3a29a8e8565cc92055763494466ab7f51b3a215/kubepods/burstable/pod1d6441fadff5f1376d5cfe4a2607059d/3f70bec213067a093c86beb09bd4dedb486af5f08d01eb5b268bb22d7e981bd8/freezer.state
	I1205 07:04:53.392177  159551 api_server.go:204] freezer state: "THAWED"
	I1205 07:04:53.392213  159551 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1205 07:04:53.400446  159551 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1205 07:04:53.400476  159551 status.go:463] multinode-941176 apiserver status = Running (err=<nil>)
	I1205 07:04:53.400487  159551 status.go:176] multinode-941176 status: &{Name:multinode-941176 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:04:53.400532  159551 status.go:174] checking status of multinode-941176-m02 ...
	I1205 07:04:53.400871  159551 cli_runner.go:164] Run: docker container inspect multinode-941176-m02 --format={{.State.Status}}
	I1205 07:04:53.418256  159551 status.go:371] multinode-941176-m02 host status = "Running" (err=<nil>)
	I1205 07:04:53.418278  159551 host.go:66] Checking if "multinode-941176-m02" exists ...
	I1205 07:04:53.418582  159551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-941176-m02
	I1205 07:04:53.435484  159551 host.go:66] Checking if "multinode-941176-m02" exists ...
	I1205 07:04:53.435915  159551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:04:53.435968  159551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-941176-m02
	I1205 07:04:53.453539  159551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/multinode-941176-m02/id_rsa Username:docker}
	I1205 07:04:53.554310  159551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:04:53.566785  159551 status.go:176] multinode-941176-m02 status: &{Name:multinode-941176-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:04:53.566827  159551 status.go:174] checking status of multinode-941176-m03 ...
	I1205 07:04:53.567127  159551 cli_runner.go:164] Run: docker container inspect multinode-941176-m03 --format={{.State.Status}}
	I1205 07:04:53.583770  159551 status.go:371] multinode-941176-m03 host status = "Stopped" (err=<nil>)
	I1205 07:04:53.583794  159551 status.go:384] host is not running, skipping remaining checks
	I1205 07:04:53.583801  159551 status.go:176] multinode-941176-m03 status: &{Name:multinode-941176-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
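
The stderr trace shows how `status` probes the apiserver without going through kubectl: find the process, resolve its freezer cgroup, confirm it is THAWED, then hit /healthz. A hedged rendering of those steps as run inside the node:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')                  # newest matching apiserver process
    CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)  # its freezer cgroup path
    sudo cat /sys/fs/cgroup/freezer$CG/freezer.state                     # expect THAWED
    curl -sk https://192.168.67.2:8443/healthz                           # expect ok, if anonymous health checks are allowed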

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-941176 node start m03 -v=5 --alsologtostderr: (7.281817162s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.13s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-941176
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-941176
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-941176: (25.214484174s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-941176 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-941176 --wait=true -v=5 --alsologtostderr: (55.455190306s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-941176
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.80s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-941176 node delete m03: (5.048486775s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)
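
The last assertion uses a go-template instead of JSONPath so it can filter on the Ready condition; the query prints one status per node and is reusable as-is:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'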

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-941176 stop: (23.969620957s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-941176 status: exit status 7 (100.959098ms)

                                                
                                                
-- stdout --
	multinode-941176
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-941176-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr: exit status 7 (97.840247ms)

                                                
                                                
-- stdout --
	multinode-941176
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-941176-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:06:52.369439  168374 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:06:52.369616  168374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:52.369641  168374 out.go:374] Setting ErrFile to fd 2...
	I1205 07:06:52.369663  168374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:06:52.369956  168374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:06:52.370194  168374 out.go:368] Setting JSON to false
	I1205 07:06:52.370257  168374 mustload.go:66] Loading cluster: multinode-941176
	I1205 07:06:52.370346  168374 notify.go:221] Checking for updates...
	I1205 07:06:52.370728  168374 config.go:182] Loaded profile config "multinode-941176": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:06:52.370762  168374 status.go:174] checking status of multinode-941176 ...
	I1205 07:06:52.371398  168374 cli_runner.go:164] Run: docker container inspect multinode-941176 --format={{.State.Status}}
	I1205 07:06:52.390409  168374 status.go:371] multinode-941176 host status = "Stopped" (err=<nil>)
	I1205 07:06:52.390436  168374 status.go:384] host is not running, skipping remaining checks
	I1205 07:06:52.390444  168374 status.go:176] multinode-941176 status: &{Name:multinode-941176 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:06:52.390476  168374 status.go:174] checking status of multinode-941176-m02 ...
	I1205 07:06:52.390783  168374 cli_runner.go:164] Run: docker container inspect multinode-941176-m02 --format={{.State.Status}}
	I1205 07:06:52.420240  168374 status.go:371] multinode-941176-m02 host status = "Stopped" (err=<nil>)
	I1205 07:06:52.420262  168374 status.go:384] host is not running, skipping remaining checks
	I1205 07:06:52.420270  168374 status.go:176] multinode-941176-m02 status: &{Name:multinode-941176-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.17s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-941176 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1205 07:07:16.968173    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-941176 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.117682724s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-941176 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.87s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-941176
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-941176-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-941176-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.545846ms)

                                                
                                                
-- stdout --
	* [multinode-941176-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-941176-m02' is duplicated with machine name 'multinode-941176-m02' in profile 'multinode-941176'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-941176-m03 --driver=docker  --container-runtime=containerd
E1205 07:08:01.797381    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-941176-m03 --driver=docker  --container-runtime=containerd: (35.405439119s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-941176
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-941176: exit status 80 (344.923222ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-941176 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-941176-m03 already exists in multinode-941176-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-941176-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-941176-m03: (2.049920865s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.94s)

                                                
                                    
TestPreload (120.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-548442 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1205 07:08:57.090159    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:09:14.019500    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-548442 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.876966088s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-548442 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-548442 image pull gcr.io/k8s-minikube/busybox: (2.404877576s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-548442
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-548442: (5.934571338s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-548442 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-548442 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.699156961s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-548442 image list
helpers_test.go:175: Cleaning up "test-preload-548442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-548442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-548442: (2.372649692s)
--- PASS: TestPreload (120.52s)
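
The preload test is a four-step workflow: build the cluster without preloaded images, pull an extra image, stop, then restart with preload enabled and confirm the pulled image survived. A sketch with an illustrative profile name:

    minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true
    minikube -p preload-demo image list    # busybox should still be listed after the restart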

                                                
                                    
TestScheduledStopUnix (109.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-371145 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-371145 --memory=3072 --driver=docker  --container-runtime=containerd: (33.805152853s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-371145 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:11:05.805873  184242 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:11:05.806014  184242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:05.806024  184242 out.go:374] Setting ErrFile to fd 2...
	I1205 07:11:05.806032  184242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:05.806320  184242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:11:05.806585  184242 out.go:368] Setting JSON to false
	I1205 07:11:05.806708  184242 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:05.807107  184242 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:11:05.807204  184242 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/config.json ...
	I1205 07:11:05.807410  184242 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:05.807534  184242 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-371145 -n scheduled-stop-371145
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-371145 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:11:06.309559  184335 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:11:06.309720  184335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:06.309728  184335 out.go:374] Setting ErrFile to fd 2...
	I1205 07:11:06.309733  184335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:06.310043  184335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:11:06.311030  184335 out.go:368] Setting JSON to false
	I1205 07:11:06.311382  184335 daemonize_unix.go:73] killing process 184258 as it is an old scheduled stop
	I1205 07:11:06.311455  184335 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:06.311922  184335 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:11:06.312003  184335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/config.json ...
	I1205 07:11:06.312175  184335 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:06.312284  184335 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1205 07:11:06.319925    4192 retry.go:31] will retry after 116.863µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.321070    4192 retry.go:31] will retry after 177.299µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.322196    4192 retry.go:31] will retry after 299.733µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.323275    4192 retry.go:31] will retry after 300.604µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.324343    4192 retry.go:31] will retry after 573.807µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.325457    4192 retry.go:31] will retry after 930.913µs: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.326576    4192 retry.go:31] will retry after 1.706984ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.328747    4192 retry.go:31] will retry after 1.088334ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.330943    4192 retry.go:31] will retry after 2.240024ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.334661    4192 retry.go:31] will retry after 4.847552ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.339887    4192 retry.go:31] will retry after 4.988586ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.345126    4192 retry.go:31] will retry after 12.890978ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.362145    4192 retry.go:31] will retry after 15.043076ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.377314    4192 retry.go:31] will retry after 26.133802ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
I1205 07:11:06.404603    4192 retry.go:31] will retry after 37.877954ms: open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-371145 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-371145 -n scheduled-stop-371145
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-371145
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-371145 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:11:32.249415  185017 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:11:32.249641  185017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:32.249673  185017 out.go:374] Setting ErrFile to fd 2...
	I1205 07:11:32.249696  185017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:32.249978  185017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:11:32.250279  185017 out.go:368] Setting JSON to false
	I1205 07:11:32.250416  185017 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:32.250816  185017 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:11:32.250930  185017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/scheduled-stop-371145/config.json ...
	I1205 07:11:32.251142  185017 mustload.go:66] Loading cluster: scheduled-stop-371145
	I1205 07:11:32.251299  185017 config.go:182] Loaded profile config "scheduled-stop-371145": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1205 07:12:16.968749    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-371145
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-371145: exit status 7 (73.783135ms)

                                                
                                                
-- stdout --
	scheduled-stop-371145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-371145 -n scheduled-stop-371145
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-371145 -n scheduled-stop-371145: exit status 7 (65.033949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-371145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-371145
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-371145: (4.429909528s)
--- PASS: TestScheduledStopUnix (109.88s)
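
The scheduled-stop flow above reduces to three commands; a minimal sketch (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m                    # arm a stop five minutes out
    minikube status -p sched-demo --format='{{.TimeToStop}}'     # inspect the pending countdown
    minikube stop -p sched-demo --cancel-scheduled               # cancel all pending scheduled stops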

                                                
                                    
TestInsufficientStorage (12.43s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-698834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-698834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.858722122s)

-- stdout --
	{"specversion":"1.0","id":"63443a1e-d570-4ee2-a307-b2c8881b2f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-698834] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f9d1230-88a5-49f3-99c4-8345f72ddbe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"5704edcc-d687-4e88-a229-ddecd10d24ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d8cfbc0-ff1a-4b86-824a-8ce41fceed46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig"}}
	{"specversion":"1.0","id":"2859bbe6-6148-4cfe-b57a-0d3570f9b6bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube"}}
	{"specversion":"1.0","id":"d4177c97-64ce-4dcf-9109-c57174eb7ace","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"362eb2be-6d79-4f39-aaa0-ce87fdd6d7b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b8ed4ac5-4524-49ea-a05b-e91a9502c268","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"51f2e5e4-7ac8-4cee-8122-a9a3ffe4ae24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"be6d728e-51c1-4713-81b5-ca3bbe739c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d16e5ec-5f41-4de5-8078-c862416e93b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8fbbe9a3-aa79-4a93-b12f-e7785ec0cb3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-698834\" primary control-plane node in \"insufficient-storage-698834\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5927911e-569e-46b9-bfb6-5729fccb8e47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"34837796-0a16-448d-9715-03827535d935","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdca3de7-bd34-46f9-aadb-2505a018b584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
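The RSRC_DOCKER_STORAGE error above (exit code 26) fires before any node is created, once minikube sees /var on the Docker host at 100% capacity. A minimal sketch of the remediation the advice text suggests, assuming a Linux host whose Docker data root lives under /var:

# confirm the filesystem backing Docker is actually full
df -h /var
# reclaim unused containers and networks; -a also removes unreferenced images
docker system prune -a
# if a cluster is already up with the Docker runtime, prune inside the node too
minikube ssh -- docker system prune

As the message notes, --force skips the check, but that only postpones the failure if the disk is genuinely full.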
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-698834 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-698834 --output=json --layout=cluster: exit status 7 (296.994529ms)

-- stdout --
	{"Name":"insufficient-storage-698834","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-698834","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
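The --layout=cluster payload nests per-component states under .Nodes. A small sketch for pulling the status names out of it, assuming jq is available on the host:

# top-level StatusName reflects the whole cluster ("InsufficientStorage" here);
# run status unpiped if you also want its exit code (7) for scripting
out/minikube-linux-arm64 status -p insufficient-storage-698834 --output=json --layout=cluster \
  | jq -r '.StatusName, (.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)")'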
** stderr ** 
	E1205 07:12:31.968779  186834 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-698834" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-698834 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-698834 --output=json --layout=cluster: exit status 7 (315.890017ms)

-- stdout --
	{"Name":"insufficient-storage-698834","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-698834","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1205 07:12:32.283430  186901 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-698834" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
	E1205 07:12:32.293111  186901 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/insufficient-storage-698834/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-698834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-698834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-698834: (1.95347629s)
--- PASS: TestInsufficientStorage (12.43s)

TestRunningBinaryUpgrade (328.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4248510683 start -p running-upgrade-217876 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4248510683 start -p running-upgrade-217876 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.339467773s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-217876 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1205 07:22:16.968194    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:23:01.797388    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:24:14.019076    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:25:37.093283    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-217876 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.160215755s)
helpers_test.go:175: Cleaning up "running-upgrade-217876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-217876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-217876: (2.007648936s)
--- PASS: TestRunningBinaryUpgrade (328.72s)

TestMissingContainerUpgrade (177.85s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1527604216 start -p missing-upgrade-486753 --memory=3072 --driver=docker  --container-runtime=containerd
E1205 07:13:01.797874    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1527604216 start -p missing-upgrade-486753 --memory=3072 --driver=docker  --container-runtime=containerd: (1m10.920540455s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-486753
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-486753: (1.005828076s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-486753
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-486753 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-486753 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m42.319585647s)
helpers_test.go:175: Cleaning up "missing-upgrade-486753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-486753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-486753: (2.141393475s)
--- PASS: TestMissingContainerUpgrade (177.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (87.994464ms)

-- stdout --
	* [NoKubernetes-912948] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
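The MK_USAGE exit is the behavior under test: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config triggers the same conflict. Either drop the version flag or clear the global setting, e.g.:

# start without Kubernetes components; no version may be requested
out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --driver=docker --container-runtime=containerd

# or clear a globally configured version that would otherwise be injected
out/minikube-linux-arm64 config unset kubernetes-version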
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (52.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-912948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-912948 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (52.140580128s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-912948 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (52.60s)

TestNoKubernetes/serial/StartWithStopK8s (8.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.947299643s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-912948 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-912948 status -o json: exit status 2 (439.539663ms)

-- stdout --
	{"Name":"NoKubernetes-912948","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-912948
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-912948: (2.523986526s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.91s)

TestNoKubernetes/serial/Start (8.8s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-912948 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.798790227s)
--- PASS: TestNoKubernetes/serial/Start (8.80s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-912948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-912948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (412.970904ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
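The "Process exited with status 3" comes from systemctl itself: "systemctl is-active --quiet" prints nothing and reports the unit state purely through its exit code (0 when active, conventionally 3 when inactive), which is what lets the test assert that kubelet is not running. Replayed by hand:

# succeeds only while kubelet is active inside the node
if out/minikube-linux-arm64 ssh -p NoKubernetes-912948 "sudo systemctl is-active --quiet service kubelet"; then
  echo "kubelet active"
else
  echo "kubelet not active (expected with --no-kubernetes)"
fi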
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

TestNoKubernetes/serial/ProfileList (1.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.47s)

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-912948
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-912948: (1.488596025s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (6.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-912948 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-912948 --driver=docker  --container-runtime=containerd: (6.568037686s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.57s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-912948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-912948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.834457ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

TestStoppedBinaryUpgrade/Upgrade (301.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.839709557 start -p stopped-upgrade-262727 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.839709557 start -p stopped-upgrade-262727 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.685622347s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.839709557 -p stopped-upgrade-262727 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.839709557 -p stopped-upgrade-262727 stop: (1.258367228s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-262727 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1205 07:17:16.967560    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:18:01.797344    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:14.019891    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:24.875364    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:20.034176    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-262727 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.985117273s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (301.93s)
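The upgrade path exercised here is: create the cluster with the old released binary, stop it with that same binary, then start the identical profile with the binary under test, which must adopt the existing profile and bring the cluster back up. Condensed to its three steps:

# 1. create and stop the cluster with the previous release (the test's temp copy of v1.35.0)
/tmp/minikube-v1.35.0.839709557 start -p stopped-upgrade-262727 --memory=3072 --vm-driver=docker --container-runtime=containerd
/tmp/minikube-v1.35.0.839709557 -p stopped-upgrade-262727 stop
# 2. restart the stopped profile with the new binary; it must upgrade in place
out/minikube-linux-arm64 start -p stopped-upgrade-262727 --memory=3072 --driver=docker --container-runtime=containerd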

TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-262727
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-262727: (2.19294669s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)

TestPause/serial/Start (82.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-557657 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-557657 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m22.965507746s)
--- PASS: TestPause/serial/Start (82.97s)

TestPause/serial/SecondStartNoReconfiguration (8.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-557657 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-557657 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.145028761s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.15s)

TestPause/serial/Pause (1.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-557657 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-557657 --alsologtostderr -v=5: (1.082175667s)
--- PASS: TestPause/serial/Pause (1.08s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-557657 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-557657 --output=json --layout=cluster: exit status 2 (437.751105ms)

-- stdout --
	{"Name":"pause-557657","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-557657","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
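The StatusCode values appear to borrow HTTP semantics: 200/OK for healthy components, 405/Stopped for the intentionally stopped kubelet, and 418/Paused for the paused cluster, which is also why the command exits 2 rather than 0. A sketch for asserting on the paused state, assuming jq:

# jq -e sets the exit code from the boolean, making this usable in scripts
out/minikube-linux-arm64 status -p pause-557657 --output=json --layout=cluster \
  | jq -e '.StatusName == "Paused"' >/dev/null && echo "cluster is paused"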
--- PASS: TestPause/serial/VerifyStatus (0.44s)

TestPause/serial/Unpause (0.88s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-557657 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-557657 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-557657 --alsologtostderr -v=5: (1.172570436s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (3.16s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-557657 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-557657 --alsologtostderr -v=5: (3.159786705s)
--- PASS: TestPause/serial/DeletePaused (3.16s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-557657
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-557657: exit status 1 (33.401055ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-557657: no such volume

** /stderr **
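Verifying deletion amounts to checking that every Docker artifact named after the profile is gone; note that "docker volume inspect" reports a missing volume with an empty [] on stdout plus a non-zero exit, which is exactly what the test relies on. The same checks by hand:

# ps and network ls should print nothing; volume inspect prints [] and exits 1
docker ps -a --filter name=pause-557657 --format '{{.Names}}'
docker volume inspect pause-557657 2>/dev/null
docker network ls --filter name=pause-557657 --format '{{.Name}}'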
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

TestNetworkPlugins/group/false (5.29s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-183381 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-183381 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (269.673641ms)

-- stdout --
	* [false-183381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1205 07:27:47.175664  249116 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:27:47.175824  249116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:27:47.175834  249116 out.go:374] Setting ErrFile to fd 2...
	I1205 07:27:47.175839  249116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:27:47.176079  249116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
	I1205 07:27:47.176868  249116 out.go:368] Setting JSON to false
	I1205 07:27:47.177693  249116 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7814,"bootTime":1764911853,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 07:27:47.177780  249116 start.go:143] virtualization:  
	I1205 07:27:47.181610  249116 out.go:179] * [false-183381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:27:47.184729  249116 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:27:47.184823  249116 notify.go:221] Checking for updates...
	I1205 07:27:47.191490  249116 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:27:47.194368  249116 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
	I1205 07:27:47.197270  249116 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
	I1205 07:27:47.201278  249116 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:27:47.204960  249116 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:27:47.209469  249116 config.go:182] Loaded profile config "force-systemd-env-788551": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1205 07:27:47.209580  249116 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:27:47.245993  249116 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:27:47.246123  249116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:27:47.349255  249116 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2025-12-05 07:27:47.337631457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:27:47.349359  249116 docker.go:319] overlay module found
	I1205 07:27:47.355183  249116 out.go:179] * Using the docker driver based on user configuration
	I1205 07:27:47.358452  249116 start.go:309] selected driver: docker
	I1205 07:27:47.358473  249116 start.go:927] validating driver "docker" against <nil>
	I1205 07:27:47.358485  249116 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:27:47.362312  249116 out.go:203] 
	W1205 07:27:47.365658  249116 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1205 07:27:47.368748  249116 out.go:203] 

** /stderr **
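The refusal is deliberate: with the containerd runtime minikube has no built-in pod network, so --cni=false is rejected and this subtest only checks that guard. A configuration that would actually start needs to name some CNI, for example:

# any concrete CNI satisfies the check; bridge is the simplest built-in option
out/minikube-linux-arm64 start -p false-183381 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd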
net_test.go:88: 
----------------------- debugLogs start: false-183381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-183381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-183381

>>> host: /etc/nsswitch.conf:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/hosts:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/resolv.conf:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-183381

>>> host: crictl pods:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: crictl containers:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> k8s: describe netcat deployment:
error: context "false-183381" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-183381" does not exist

>>> k8s: netcat logs:
error: context "false-183381" does not exist

>>> k8s: describe coredns deployment:
error: context "false-183381" does not exist

>>> k8s: describe coredns pods:
error: context "false-183381" does not exist

>>> k8s: coredns logs:
error: context "false-183381" does not exist

>>> k8s: describe api server pod(s):
error: context "false-183381" does not exist

>>> k8s: api server logs:
error: context "false-183381" does not exist

>>> host: /etc/cni:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: ip a s:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: ip r s:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: iptables-save:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: iptables table nat:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> k8s: describe kube-proxy daemon set:
error: context "false-183381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-183381" does not exist

>>> k8s: kube-proxy logs:
error: context "false-183381" does not exist

>>> host: kubelet daemon status:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: kubelet daemon config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> k8s: kubelet logs:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-183381

>>> host: docker daemon status:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: docker daemon config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/docker/daemon.json:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: docker system info:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: cri-docker daemon status:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: cri-docker daemon config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: cri-dockerd version:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: containerd daemon status:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: containerd daemon config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/containerd/config.toml:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: containerd config dump:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: crio daemon status:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: crio daemon config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: /etc/crio:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

>>> host: crio config:
* Profile "false-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-183381"

----------------------- debugLogs end: false-183381 [took: 4.822602558s] --------------------------------
helpers_test.go:175: Cleaning up "false-183381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-183381
--- PASS: TestNetworkPlugins/group/false (5.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (63.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-943366 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1205 07:29:14.019664    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-943366 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.16109041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-943366 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [70078fe8-bc54-4d00-9709-9600256e3034] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [70078fe8-bc54-4d00-9709-9600256e3034] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0042382s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-943366 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
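
The DeployApp step applies testdata/busybox.yaml from the minikube integration-test tree; the file itself is not reproduced in this log. A minimal sketch of an equivalent manifest -- the pod name and integration-test=busybox label come from the log above, while the image is assumed from the busybox image later listed by VerifyKubernetesImages:

# Hypothetical reconstruction, not the actual testdata/busybox.yaml.
kubectl --context old-k8s-version-943366 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    # Keep the container alive so the test can exec into it (e.g. "ulimit -n").
    command: ['sh', '-c', 'sleep 3600']
EOF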

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-943366 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-943366 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084243229s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-943366 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)
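
For the metrics-server addon the test swaps both the image and its registry: --images overrides the addon's image reference and --registries points it at fake.domain, so the pod can never actually pull -- the kubectl describe that follows only has to confirm the override landed in the Deployment spec. A sketch of the same check done by hand (the grep filter is illustrative):

out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-943366 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
# The Image: line of the Deployment should now point at fake.domain.
kubectl --context old-k8s-version-943366 describe deploy/metrics-server -n kube-system | grep -i 'image:'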

TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-943366 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-943366 --alsologtostderr -v=3: (12.129094506s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-943366 -n old-k8s-version-943366
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-943366 -n old-k8s-version-943366: exit status 7 (67.03667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-943366 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
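
The "exit status 7 (may be ok)" above is the expected shape of a stopped cluster: as minikube status --help describes, the exit code bit-encodes component state (1 for the host, 2 for the cluster/kubelet, 4 for Kubernetes not OK), so a fully stopped profile reports 1+2+4 = 7, and the exit status 2 seen in the Pause tests below appears to correspond to only the kubelet layer being down. A minimal sketch of reading it by hand:

# Query one field with a Go template, then inspect the bit-encoded exit code:
# 7 = host + kubelet + apiserver all down (after "minikube stop"),
# 2 = host up but kubelet not running (after "minikube pause").
out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-943366
echo "status exit code: $?"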

TestStartStop/group/old-k8s-version/serial/SecondStart (53.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-943366 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-943366 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.319400868s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-943366 -n old-k8s-version-943366
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.71s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gb6dt" [2f457c94-1ab0-4353-b1c8-6fb83202af32] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003001586s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gb6dt" [2f457c94-1ab0-4353-b1c8-6fb83202af32] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003566687s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-943366 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-943366 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
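
VerifyKubernetesImages lists every image present in the node and logs the ones outside the stock minikube/Kubernetes set; the kindest/kindnetd and busybox entries above are expected leftovers from the CNI and the earlier DeployApp step. Roughly the same inspection by hand -- the jq path assumes the repoTags field of minikube's JSON image listing:

# List cached images in the profile's runtime; sort for easy diffing.
out/minikube-linux-arm64 -p old-k8s-version-943366 image list --format=json \
  | jq -r '.[].repoTags[]' | sort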

TestStartStop/group/old-k8s-version/serial/Pause (3.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-943366 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-943366 -n old-k8s-version-943366
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-943366 -n old-k8s-version-943366: exit status 2 (379.567739ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-943366 -n old-k8s-version-943366
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-943366 -n old-k8s-version-943366: exit status 2 (394.684245ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-943366 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-943366 -n old-k8s-version-943366
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-943366 -n old-k8s-version-943366
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.86s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m29.529091403s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.53s)

TestStartStop/group/embed-certs/serial/FirstStart (86.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1205 07:32:16.968092    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:33:01.797489    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m26.321554648s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-083143 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [295b3efa-bb7c-4d1c-8a8e-2b39473789f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [295b3efa-bb7c-4d1c-8a8e-2b39473789f3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003125795s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-083143 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-861489 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9fd67b2c-1e68-4856-a571-61efa824cf3f] Pending
helpers_test.go:352: "busybox" [9fd67b2c-1e68-4856-a571-61efa824cf3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9fd67b2c-1e68-4856-a571-61efa824cf3f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004442403s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-861489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-083143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029218806s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-083143 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-083143 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-083143 --alsologtostderr -v=3: (12.109225803s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-861489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003709507s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-861489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-861489 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-861489 --alsologtostderr -v=3: (12.087721316s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143: exit status 7 (65.690104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-083143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083143 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (56.251102159s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.66s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-861489 -n embed-certs-861489
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-861489 -n embed-certs-861489: exit status 7 (75.092363ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-861489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (56.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1205 07:34:14.019757    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-861489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (56.063116349s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-861489 -n embed-certs-861489
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.47s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sjnl5" [cd66b325-bce3-4ba3-ad48-c01d5617ff95] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003232342s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2b457" [42297d41-7013-48fb-b2ca-3961925367d4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002888793s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sjnl5" [cd66b325-bce3-4ba3-ad48-c01d5617ff95] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003498443s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-083143 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2b457" [42297d41-7013-48fb-b2ca-3961925367d4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010431671s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-861489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-083143 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-083143 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143: exit status 2 (348.8424ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143: exit status 2 (415.976266ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-083143 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083143 -n default-k8s-diff-port-083143
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-861489 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (4.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-861489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-861489 --alsologtostderr -v=1: (1.481951424s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-861489 -n embed-certs-861489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-861489 -n embed-certs-861489: exit status 2 (432.667384ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-861489 -n embed-certs-861489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-861489 -n embed-certs-861489: exit status 2 (473.8148ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-861489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-861489 -n embed-certs-861489
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-861489 -n embed-certs-861489
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.70s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/no-preload/serial/Stop (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-241270 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-241270 --alsologtostderr -v=3: (1.306985617s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241270 -n no-preload-241270: exit status 7 (66.895786ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-241270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-622440 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-622440 --alsologtostderr -v=3: (1.295112692s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-622440 -n newest-cni-622440: exit status 7 (68.428946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-622440 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-622440 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (80.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m20.700424797s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.70s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-183381 "pgrep -a kubelet"
I1205 07:53:14.270861    4192 config.go:182] Loaded profile config "auto-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c9w2d" [b7fa6422-bd2a-44f3-b043-76687275d10e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c9w2d" [b7fa6422-bd2a-44f3-b043-76687275d10e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004071325s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.26s)
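
The DNS/Localhost/HairPin trio repeats for every network plugin below and probes three distinct data paths from the same netcat pod: cluster DNS resolution, a plain loopback dial, and hairpin traffic, where the pod reaches itself back through its own Service name (the case most likely to break under a misconfigured CNI). Condensed from the commands above:

# 1. Cluster DNS: resolve the kubernetes API Service name.
kubectl --context auto-183381 exec deployment/netcat -- nslookup kubernetes.default
# 2. Loopback: the pod dials its own port directly.
kubectl --context auto-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# 3. Hairpin: the pod dials the "netcat" Service, which routes back to itself.
kubectl --context auto-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"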

TestNetworkPlugins/group/kindnet/Start (49.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (49.219408612s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.23s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5xg7h" [5ac885f0-af54-488b-a71b-76eec1053f32] Running
E1205 07:54:34.371512    4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/default-k8s-diff-port-083143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005151668s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-183381 "pgrep -a kubelet"
I1205 07:54:40.343990    4192 config.go:182] Loaded profile config "kindnet-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8scw6" [b492477b-4164-4fec-95ce-672d85ab6b6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8scw6" [b492477b-4164-4fec-95ce-672d85ab6b6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003979864s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/Start (64.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.408243688s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.41s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-shtjr" [82d2bff2-6383-476c-a6d0-eebaae01fa88] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003950137s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-183381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mtv4s" [2794e7eb-ccb9-4e3e-9f69-2c968d251c17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mtv4s" [2794e7eb-ccb9-4e3e-9f69-2c968d251c17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003940237s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.32s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (56.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.717309791s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.72s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-183381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6g7v5" [3e4c3792-a24a-49aa-b78b-eda2eb6927b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6g7v5" [3e4c3792-a24a-49aa-b78b-eda2eb6927b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.018677255s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (44.55s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (44.552216147s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-183381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9hfh8" [f72e204f-54b2-41f9-aa43-6e247b06153c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9hfh8" [f72e204f-54b2-41f9-aa43-6e247b06153c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003415443s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (62.09s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m2.093536407s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nznwv" [ac06d9f6-a841-4802-8abc-cf9cfb10b0f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003568516s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-183381 "pgrep -a kubelet"
I1205 08:00:46.190716    4192 config.go:182] Loaded profile config "flannel-183381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dkcch" [820b4315-9e65-4123-b3a8-5c2f0a73cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dkcch" [820b4315-9e65-4123-b3a8-5c2f0a73cbb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004100074s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (74.04s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-183381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.043820149s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.04s)
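To inspect what a --cni=bridge start actually wrote, one plausible spot-check is to list the CNI config directory inside the node (standard CNI paths assumed, not taken from this log):

  minikube ssh -p bridge-183381 -- ls /etc/cni/net.d
  minikube ssh -p bridge-183381 -- sudo sh -c 'cat /etc/cni/net.d/*'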

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-183381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (8.29s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-183381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sl9ml" [b15d886b-389c-4030-a718-77f4bb36679b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sl9ml" [b15d886b-389c-4030-a718-77f4bb36679b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003775441s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-183381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-183381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (37/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.16
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.29
389 TestNetworkPlugins/group/kubenet 5.28
400 TestNetworkPlugins/group/cilium 5.52
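Any individual test in these tables can be re-run locally by filtering the integration suite with go test's standard -run flag; a sketch assuming a minikube source checkout with out/minikube-linux-arm64 already built (package path and timeout are illustrative):

  go test ./test/integration -v -timeout 90m \
    -run 'TestNetworkPlugins/group/bridge'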
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1205 06:05:27.392438    4192 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1205 06:05:27.504762    4192 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1205 06:05:27.551548    4192 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)
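The two 404s above can be double-checked directly; this simply re-probes the first URL the test tried (URL copied verbatim from the log):

  curl -s -o /dev/null -w '%{http_code}\n' \
    "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4"
  # prints 404 until a preload tarball is published for this version/runtime/arch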

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-992506 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-992506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-992506
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.29s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-358601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-358601
--- SKIP: TestStartStop/group/disable-driver-mounts (0.29s)

TestNetworkPlugins/group/kubenet (5.28s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-183381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-183381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-183381

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/hosts:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/resolv.conf:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-183381

>>> host: crictl pods:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: crictl containers:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> k8s: describe netcat deployment:
error: context "kubenet-183381" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-183381" does not exist

>>> k8s: netcat logs:
error: context "kubenet-183381" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-183381" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-183381" does not exist

>>> k8s: coredns logs:
error: context "kubenet-183381" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-183381" does not exist

>>> k8s: api server logs:
error: context "kubenet-183381" does not exist
>>> host: /etc/cni:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: ip a s:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: ip r s:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: iptables-save:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: iptables table nat:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-183381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-183381" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-183381" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: kubelet daemon config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> k8s: kubelet logs:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-557657
contexts:
- context:
    cluster: pause-557657
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-557657
  name: pause-557657
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-557657
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.crt
    client-key: /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/pause-557657/client.key
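Note that the dump above has current-context set to "" and contains no kubenet-183381 entry, which is why every kubectl --context kubenet-183381 call in this debugLogs block fails with "context was not found". The same merged view can be reproduced with:

  kubectl config view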
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-183381

>>> host: docker daemon status:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: docker daemon config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: docker system info:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: cri-docker daemon status:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: cri-docker daemon config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: cri-dockerd version:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: containerd daemon status:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: containerd daemon config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: containerd config dump:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: crio daemon status:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: crio daemon config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: /etc/crio:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

>>> host: crio config:
* Profile "kubenet-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-183381"

----------------------- debugLogs end: kubenet-183381 [took: 5.041349444s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-183381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-183381
--- SKIP: TestNetworkPlugins/group/kubenet (5.28s)

TestNetworkPlugins/group/cilium (5.52s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-183381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-183381

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-183381

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-183381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-183381

>>> host: /etc/nsswitch.conf:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/hosts:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/resolv.conf:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-183381

>>> host: crictl pods:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: crictl containers:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> k8s: describe netcat deployment:
error: context "cilium-183381" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-183381" does not exist

>>> k8s: netcat logs:
error: context "cilium-183381" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-183381" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-183381" does not exist

>>> k8s: coredns logs:
error: context "cilium-183381" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-183381" does not exist

>>> k8s: api server logs:
error: context "cilium-183381" does not exist

>>> host: /etc/cni:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: ip a s:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: ip r s:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: iptables-save:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: iptables table nat:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-183381

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-183381

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-183381" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-183381" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-183381

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-183381

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-183381" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-183381" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-183381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-183381" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-183381" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: kubelet daemon config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> k8s: kubelet logs:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-183381

>>> host: docker daemon status:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: docker daemon config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: docker system info:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: cri-docker daemon status:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: cri-docker daemon config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: cri-dockerd version:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: containerd daemon status:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: containerd daemon config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: containerd config dump:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: crio daemon status:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: crio daemon config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: /etc/crio:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

>>> host: crio config:
* Profile "cilium-183381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183381"

----------------------- debugLogs end: cilium-183381 [took: 5.315982479s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-183381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-183381
--- SKIP: TestNetworkPlugins/group/cilium (5.52s)
